2023-07-11 18:16:40,441 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b 2023-07-11 18:16:40,464 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-11 18:16:40,487 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-11 18:16:40,488 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678, deleteOnExit=true 2023-07-11 18:16:40,488 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-11 18:16:40,488 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/test.cache.data in system properties and HBase conf 2023-07-11 18:16:40,489 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.tmp.dir in system properties and HBase conf 2023-07-11 18:16:40,489 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir in system properties and HBase conf 2023-07-11 18:16:40,490 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-11 18:16:40,490 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-11 18:16:40,490 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-11 18:16:40,616 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-11 18:16:41,022 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-11 18:16:41,027 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-11 18:16:41,028 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-11 18:16:41,028 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-11 18:16:41,028 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 18:16:41,029 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-11 18:16:41,029 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-11 18:16:41,029 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 18:16:41,030 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 18:16:41,030 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-11 18:16:41,031 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/nfs.dump.dir in system properties and HBase conf 2023-07-11 18:16:41,031 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir in system properties and HBase conf 2023-07-11 18:16:41,032 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 18:16:41,032 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-11 18:16:41,032 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-11 18:16:41,587 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 18:16:41,592 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 18:16:41,902 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-11 18:16:42,091 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-11 18:16:42,108 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:16:42,145 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:16:42,182 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/Jetty_localhost_39865_hdfs____h2gfp7/webapp 2023-07-11 18:16:42,329 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39865 2023-07-11 18:16:42,339 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 18:16:42,339 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 18:16:42,804 WARN [Listener at localhost/40365] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:16:42,880 WARN [Listener at localhost/40365] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:16:42,900 WARN [Listener at localhost/40365] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:16:42,907 INFO [Listener at localhost/40365] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:16:42,913 INFO [Listener at localhost/40365] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/Jetty_localhost_44375_datanode____.smj9vs/webapp 2023-07-11 18:16:43,043 INFO [Listener at localhost/40365] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44375 2023-07-11 18:16:43,550 WARN [Listener at localhost/46385] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:16:43,585 WARN [Listener at localhost/46385] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:16:43,589 WARN [Listener at localhost/46385] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:16:43,591 INFO [Listener at localhost/46385] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:16:43,602 INFO [Listener at localhost/46385] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/Jetty_localhost_40095_datanode____.fc93gp/webapp 2023-07-11 18:16:43,737 INFO [Listener at localhost/46385] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40095 2023-07-11 18:16:43,751 WARN [Listener at localhost/44521] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:16:43,826 WARN [Listener at localhost/44521] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:16:43,830 WARN [Listener at localhost/44521] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:16:43,833 INFO [Listener at localhost/44521] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:16:43,856 INFO [Listener at localhost/44521] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/Jetty_localhost_36873_datanode____.4k4v60/webapp 2023-07-11 18:16:44,015 INFO [Listener at localhost/44521] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36873 2023-07-11 18:16:44,038 WARN [Listener at localhost/35107] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:16:44,278 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbfeabd3939bb854c: Processing first storage report for DS-16d7a147-10c7-4220-b527-dbfb950941dd from datanode 0eb601d2-e270-46a9-89e4-2f65bf74c392 2023-07-11 18:16:44,280 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbfeabd3939bb854c: from storage DS-16d7a147-10c7-4220-b527-dbfb950941dd node DatanodeRegistration(127.0.0.1:39467, datanodeUuid=0eb601d2-e270-46a9-89e4-2f65bf74c392, infoPort=34073, 
infoSecurePort=0, ipcPort=46385, storageInfo=lv=-57;cid=testClusterID;nsid=1773800878;c=1689099401665), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-11 18:16:44,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdc945a299c6bafb3: Processing first storage report for DS-8dff4402-f4a4-4098-b391-d4e5069af3ae from datanode 8ed6104c-4154-4c33-89e4-0b32f1a01437 2023-07-11 18:16:44,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdc945a299c6bafb3: from storage DS-8dff4402-f4a4-4098-b391-d4e5069af3ae node DatanodeRegistration(127.0.0.1:41511, datanodeUuid=8ed6104c-4154-4c33-89e4-0b32f1a01437, infoPort=36847, infoSecurePort=0, ipcPort=44521, storageInfo=lv=-57;cid=testClusterID;nsid=1773800878;c=1689099401665), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:16:44,281 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1dceb232fd496fc3: Processing first storage report for DS-4c181186-22ac-44a8-b1a5-3c334dc774a7 from datanode ee9d9c6a-f45b-4904-8c1c-d550ecb59c3b 2023-07-11 18:16:44,281 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1dceb232fd496fc3: from storage DS-4c181186-22ac-44a8-b1a5-3c334dc774a7 node DatanodeRegistration(127.0.0.1:33363, datanodeUuid=ee9d9c6a-f45b-4904-8c1c-d550ecb59c3b, infoPort=45981, infoSecurePort=0, ipcPort=35107, storageInfo=lv=-57;cid=testClusterID;nsid=1773800878;c=1689099401665), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:16:44,282 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbfeabd3939bb854c: Processing first storage report for DS-20aa1112-371a-44f3-988f-be2a273ba532 from datanode 0eb601d2-e270-46a9-89e4-2f65bf74c392 2023-07-11 18:16:44,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbfeabd3939bb854c: from storage DS-20aa1112-371a-44f3-988f-be2a273ba532 node DatanodeRegistration(127.0.0.1:39467, datanodeUuid=0eb601d2-e270-46a9-89e4-2f65bf74c392, infoPort=34073, infoSecurePort=0, ipcPort=46385, storageInfo=lv=-57;cid=testClusterID;nsid=1773800878;c=1689099401665), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:16:44,282 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdc945a299c6bafb3: Processing first storage report for DS-60c2bc5f-d7b8-4498-bc73-8c40e8e67fbe from datanode 8ed6104c-4154-4c33-89e4-0b32f1a01437 2023-07-11 18:16:44,282 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdc945a299c6bafb3: from storage DS-60c2bc5f-d7b8-4498-bc73-8c40e8e67fbe node DatanodeRegistration(127.0.0.1:41511, datanodeUuid=8ed6104c-4154-4c33-89e4-0b32f1a01437, infoPort=36847, infoSecurePort=0, ipcPort=44521, storageInfo=lv=-57;cid=testClusterID;nsid=1773800878;c=1689099401665), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:16:44,283 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1dceb232fd496fc3: Processing first storage report for DS-df46d61f-18ce-4870-8656-a058714d387a from datanode ee9d9c6a-f45b-4904-8c1c-d550ecb59c3b 2023-07-11 18:16:44,283 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1dceb232fd496fc3: from storage 
DS-df46d61f-18ce-4870-8656-a058714d387a node DatanodeRegistration(127.0.0.1:33363, datanodeUuid=ee9d9c6a-f45b-4904-8c1c-d550ecb59c3b, infoPort=45981, infoSecurePort=0, ipcPort=35107, storageInfo=lv=-57;cid=testClusterID;nsid=1773800878;c=1689099401665), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:16:44,557 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b 2023-07-11 18:16:44,639 INFO [Listener at localhost/35107] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/zookeeper_0, clientPort=58592, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-11 18:16:44,657 INFO [Listener at localhost/35107] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58592 2023-07-11 18:16:44,665 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:44,668 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:45,354 INFO [Listener at localhost/35107] util.FSUtils(471): Created version file at hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582 with version=8 2023-07-11 18:16:45,355 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/hbase-staging 2023-07-11 18:16:45,366 DEBUG [Listener at localhost/35107] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-11 18:16:45,367 DEBUG [Listener at localhost/35107] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-11 18:16:45,367 DEBUG [Listener at localhost/35107] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-11 18:16:45,367 DEBUG [Listener at localhost/35107] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
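
The sequence above (mini DFS, a MiniZooKeeperCluster on client port 58592, the FSUtils version file, then a LocalHBaseCluster with randomized ports) is the startup path driven by HBaseTestingUtility.startMiniCluster. Below is a minimal sketch of a test class that requests the same topology through the public API, mirroring the StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1} logged at the top of this run; the class name and the test category are illustrative assumptions, not taken from the TestRSGroupsAdmin1 source.

    import org.apache.hadoop.hbase.HBaseClassTestRule;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.apache.hadoop.hbase.testclassification.LargeTests;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.ClassRule;
    import org.junit.experimental.categories.Category;

    @Category({ LargeTests.class })
    public class MiniClusterStartupSketch {

      // HBaseClassTestRule enforces the per-class timeout reported above
      // ("timeout: 13 mins"); the exact value is derived from the test category.
      @ClassRule
      public static final HBaseClassTestRule CLASS_RULE =
          HBaseClassTestRule.forClass(MiniClusterStartupSketch.class);

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUp() throws Exception {
        // Same topology as the logged StartMiniClusterOption: one master,
        // three region servers, three datanodes, one ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }

      @AfterClass
      public static void tearDown() throws Exception {
        TEST_UTIL.shutdownMiniCluster();
      }
    }

Once startup completes, an rsgroup test would typically obtain an Admin via TEST_UTIL.getAdmin() and, for group operations, an RSGroupAdmin client (for example new RSGroupAdminClient(TEST_UTIL.getConnection())).
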
2023-07-11 18:16:45,731 INFO [Listener at localhost/35107] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-11 18:16:46,302 INFO [Listener at localhost/35107] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:16:46,342 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:46,343 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:46,343 INFO [Listener at localhost/35107] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:16:46,343 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:46,343 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:16:46,494 INFO [Listener at localhost/35107] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:16:46,571 DEBUG [Listener at localhost/35107] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-11 18:16:46,667 INFO [Listener at localhost/35107] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45397 2023-07-11 18:16:46,678 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:46,680 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:46,701 INFO [Listener at localhost/35107] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45397 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-07-11 18:16:46,750 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:453970x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:16:46,753 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45397-0x101559a084d0000 connected 2023-07-11 18:16:46,785 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:16:46,786 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:16:46,790 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:16:46,800 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45397 2023-07-11 18:16:46,802 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45397 2023-07-11 18:16:46,803 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45397 2023-07-11 18:16:46,810 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45397 2023-07-11 18:16:46,811 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45397 2023-07-11 18:16:46,846 INFO [Listener at localhost/35107] log.Log(170): Logging initialized @7195ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-11 18:16:46,979 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:16:46,980 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:16:46,980 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:16:46,982 INFO [Listener at localhost/35107] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-11 18:16:46,982 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:16:46,982 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:16:46,986 INFO [Listener at localhost/35107] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
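
The master's RecoverableZooKeeper session above connects to ensemble=127.0.0.1:58592, i.e. the MiniZooKeeperCluster's randomized client port is wired into the shared test configuration. A small sketch, assuming the TEST_UTIL handle from the previous sketch, of how a test can read that quorum and port back out of the configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.HConstants;

    public final class ZkQuorumLookupSketch {
      // Rebuilds the connect string the processes above log
      // ("ZooKeeper ensemble=127.0.0.1:58592" in this run).
      static String zkConnectString(HBaseTestingUtility util) {
        Configuration conf = util.getConfiguration();
        String quorum = conf.get(HConstants.ZOOKEEPER_QUORUM);           // "hbase.zookeeper.quorum"
        int clientPort = conf.getInt(HConstants.ZOOKEEPER_CLIENT_PORT, 2181); // randomized per run
        return quorum + ":" + clientPort;
      }
    }
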
2023-07-11 18:16:47,043 INFO [Listener at localhost/35107] http.HttpServer(1146): Jetty bound to port 44785 2023-07-11 18:16:47,044 INFO [Listener at localhost/35107] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:16:47,073 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,076 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@966e0ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:16:47,077 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,077 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5310d071{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:16:47,253 INFO [Listener at localhost/35107] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:16:47,266 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:16:47,266 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:16:47,268 INFO [Listener at localhost/35107] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:16:47,274 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,303 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@9ca6b1f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/jetty-0_0_0_0-44785-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6359376069324912346/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 18:16:47,318 INFO [Listener at localhost/35107] server.AbstractConnector(333): Started ServerConnector@668fa014{HTTP/1.1, (http/1.1)}{0.0.0.0:44785} 2023-07-11 18:16:47,318 INFO [Listener at localhost/35107] server.Server(415): Started @7668ms 2023-07-11 18:16:47,322 INFO [Listener at localhost/35107] master.HMaster(444): hbase.rootdir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582, hbase.cluster.distributed=false 2023-07-11 18:16:47,429 INFO [Listener at localhost/35107] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:16:47,429 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,430 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,430 INFO 
[Listener at localhost/35107] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:16:47,430 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,430 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:16:47,436 INFO [Listener at localhost/35107] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:16:47,439 INFO [Listener at localhost/35107] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45471 2023-07-11 18:16:47,441 INFO [Listener at localhost/35107] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:16:47,449 DEBUG [Listener at localhost/35107] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:16:47,450 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:47,452 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:47,453 INFO [Listener at localhost/35107] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45471 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-07-11 18:16:47,461 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:454710x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:16:47,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45471-0x101559a084d0001 connected 2023-07-11 18:16:47,463 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:16:47,465 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:16:47,466 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:16:47,466 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45471 2023-07-11 18:16:47,466 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45471 2023-07-11 18:16:47,467 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45471 2023-07-11 18:16:47,467 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45471 2023-07-11 18:16:47,467 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45471 2023-07-11 18:16:47,470 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:16:47,470 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:16:47,470 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:16:47,471 INFO [Listener at localhost/35107] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:16:47,471 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:16:47,471 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:16:47,472 INFO [Listener at localhost/35107] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:16:47,473 INFO [Listener at localhost/35107] http.HttpServer(1146): Jetty bound to port 40633 2023-07-11 18:16:47,474 INFO [Listener at localhost/35107] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:16:47,476 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,476 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1eb685a1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:16:47,477 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,477 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@49a4b2bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:16:47,622 INFO [Listener at localhost/35107] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:16:47,624 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:16:47,624 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:16:47,625 INFO [Listener at localhost/35107] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:16:47,626 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,631 INFO 
[Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6e1656d5{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/jetty-0_0_0_0-40633-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2707064374317803033/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:16:47,632 INFO [Listener at localhost/35107] server.AbstractConnector(333): Started ServerConnector@5a7c8feb{HTTP/1.1, (http/1.1)}{0.0.0.0:40633} 2023-07-11 18:16:47,632 INFO [Listener at localhost/35107] server.Server(415): Started @7982ms 2023-07-11 18:16:47,651 INFO [Listener at localhost/35107] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:16:47,651 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,652 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,652 INFO [Listener at localhost/35107] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:16:47,652 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,653 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:16:47,653 INFO [Listener at localhost/35107] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:16:47,655 INFO [Listener at localhost/35107] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37773 2023-07-11 18:16:47,656 INFO [Listener at localhost/35107] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:16:47,661 DEBUG [Listener at localhost/35107] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:16:47,662 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:47,664 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:47,665 INFO [Listener at localhost/35107] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37773 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-07-11 18:16:47,670 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:377730x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 
18:16:47,672 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37773-0x101559a084d0002 connected 2023-07-11 18:16:47,672 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:16:47,673 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:16:47,674 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:16:47,675 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37773 2023-07-11 18:16:47,675 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37773 2023-07-11 18:16:47,678 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37773 2023-07-11 18:16:47,679 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37773 2023-07-11 18:16:47,680 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37773 2023-07-11 18:16:47,682 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:16:47,682 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:16:47,683 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:16:47,683 INFO [Listener at localhost/35107] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:16:47,684 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:16:47,684 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:16:47,684 INFO [Listener at localhost/35107] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
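
By this point two of the three region servers have instantiated their RPC executors, bound NettyRpcServer ports (45471 and 37773 so far), and registered their ZooKeeper watchers. A short sketch, again assuming the TEST_UTIL handle from the first sketch, of how a test can enumerate the region servers the mini cluster actually started:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.regionserver.HRegionServer;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public final class RegionServerListingSketch {
      static void printRegionServers(HBaseTestingUtility util) {
        MiniHBaseCluster cluster = util.getHBaseCluster();
        for (JVMClusterUtil.RegionServerThread t : cluster.getRegionServerThreads()) {
          HRegionServer rs = t.getRegionServer();
          // ServerName encodes host, RPC port and start code,
          // e.g. jenkins-hbase4.apache.org,45471,<startcode>.
          System.out.println(rs.getServerName());
        }
      }
    }
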
2023-07-11 18:16:47,685 INFO [Listener at localhost/35107] http.HttpServer(1146): Jetty bound to port 36851 2023-07-11 18:16:47,685 INFO [Listener at localhost/35107] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:16:47,694 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,695 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7fe3f683{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:16:47,695 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,695 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@29e64012{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:16:47,849 INFO [Listener at localhost/35107] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:16:47,850 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:16:47,850 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:16:47,850 INFO [Listener at localhost/35107] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:16:47,851 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,852 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@324c91bf{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/jetty-0_0_0_0-36851-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3115675725043999828/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:16:47,854 INFO [Listener at localhost/35107] server.AbstractConnector(333): Started ServerConnector@c8fca5f{HTTP/1.1, (http/1.1)}{0.0.0.0:36851} 2023-07-11 18:16:47,854 INFO [Listener at localhost/35107] server.Server(415): Started @8204ms 2023-07-11 18:16:47,866 INFO [Listener at localhost/35107] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:16:47,866 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,866 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,866 INFO [Listener at localhost/35107] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:16:47,866 INFO 
[Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:47,866 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:16:47,867 INFO [Listener at localhost/35107] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:16:47,868 INFO [Listener at localhost/35107] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45821 2023-07-11 18:16:47,868 INFO [Listener at localhost/35107] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:16:47,869 DEBUG [Listener at localhost/35107] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:16:47,871 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:47,872 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:47,874 INFO [Listener at localhost/35107] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45821 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-07-11 18:16:47,878 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:458210x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:16:47,879 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45821-0x101559a084d0003 connected 2023-07-11 18:16:47,879 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:16:47,880 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:16:47,880 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:16:47,881 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45821 2023-07-11 18:16:47,881 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45821 2023-07-11 18:16:47,881 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45821 2023-07-11 18:16:47,882 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45821 2023-07-11 18:16:47,882 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45821 2023-07-11 18:16:47,885 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:16:47,885 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:16:47,885 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:16:47,886 INFO [Listener at localhost/35107] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:16:47,886 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:16:47,886 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:16:47,887 INFO [Listener at localhost/35107] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:16:47,887 INFO [Listener at localhost/35107] http.HttpServer(1146): Jetty bound to port 35095 2023-07-11 18:16:47,888 INFO [Listener at localhost/35107] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:16:47,889 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,889 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@505a01fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:16:47,890 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:47,890 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3743c5eb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:16:48,014 INFO [Listener at localhost/35107] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:16:48,018 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:16:48,018 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:16:48,019 INFO [Listener at localhost/35107] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:16:48,020 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:48,022 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@3d078368{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/jetty-0_0_0_0-35095-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5435115629036575963/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:16:48,023 INFO [Listener at localhost/35107] server.AbstractConnector(333): Started ServerConnector@6b30b1b6{HTTP/1.1, (http/1.1)}{0.0.0.0:35095} 2023-07-11 18:16:48,023 INFO [Listener at localhost/35107] server.Server(415): Started @8373ms 2023-07-11 18:16:48,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:16:48,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@18359baf{HTTP/1.1, (http/1.1)}{0.0.0.0:37107} 2023-07-11 18:16:48,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8411ms 2023-07-11 18:16:48,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:48,076 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 18:16:48,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:48,105 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:16:48,105 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:16:48,105 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:16:48,105 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:48,105 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:16:48,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:16:48,109 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45397,1689099405546 from backup master directory 2023-07-11 18:16:48,109 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:16:48,114 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:48,114 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 18:16:48,115 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:16:48,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:48,118 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-11 18:16:48,120 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-11 18:16:48,258 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/hbase.id with ID: dd54002b-0292-4dbd-8286-dd3ac8efa65a 2023-07-11 18:16:48,309 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:48,327 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:48,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x26cd00a3 to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:48,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28e5e282, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:48,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:16:48,442 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-11 18:16:48,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-11 18:16:48,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-11 18:16:48,466 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-11 18:16:48,471 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-11 18:16:48,472 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:48,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store-tmp 2023-07-11 18:16:48,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:48,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 18:16:48,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:16:48,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:16:48,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 18:16:48,580 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:16:48,580 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
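The DEBUG traces above show the async WAL writer probing the Hadoop version by reflection before picking a code path: it looks up the CreateFlag.SHOULD_REPLICATE enum constant and a DFSClient method, and treats the resulting IllegalArgumentException / NoSuchMethodException as "older Hadoop, use the fallback". Below is a minimal sketch of that probing pattern; it assumes hadoop-common on the classpath and is only an illustration, not the actual FanOutOneBlockAsyncDFSOutputHelper code.

import java.lang.reflect.Method;
import org.apache.hadoop.fs.CreateFlag;

public final class HadoopFeatureProbe {
    // True when this Hadoop version defines the SHOULD_REPLICATE create flag
    // (the probe whose failure is logged above as "can not find SHOULD_REPLICATE flag").
    static boolean hasShouldReplicateFlag() {
        try {
            CreateFlag.valueOf("SHOULD_REPLICATE");
            return true;
        } catch (IllegalArgumentException e) {
            return false;   // hadoop 2.x style API
        }
    }

    // True when the named class declares the given method; absence selects a fallback,
    // mirroring the decryptEncryptedDataEncryptionKey probe against DFSClient above.
    static boolean hasMethod(String className, String method, Class<?>... params) {
        try {
            Method m = Class.forName(className).getDeclaredMethod(method, params);
            return m != null;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }
}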
2023-07-11 18:16:48,581 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:16:48,583 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/WALs/jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:48,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45397%2C1689099405546, suffix=, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/WALs/jenkins-hbase4.apache.org,45397,1689099405546, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/oldWALs, maxLogs=10 2023-07-11 18:16:48,693 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:48,693 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:48,693 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:48,701 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-11 18:16:48,770 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/WALs/jenkins-hbase4.apache.org,45397,1689099405546/jenkins-hbase4.apache.org%2C45397%2C1689099405546.1689099408625 2023-07-11 18:16:48,771 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK], DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK]] 2023-07-11 18:16:48,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:48,772 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:48,777 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:16:48,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:16:48,849 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:16:48,856 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-11 18:16:48,889 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-11 18:16:48,903 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-11 18:16:48,909 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:16:48,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:16:48,931 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:16:48,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:48,937 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11253934400, jitterRate=0.048104315996170044}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:48,937 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:16:48,939 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-11 18:16:48,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-11 18:16:48,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-11 18:16:48,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-11 18:16:48,974 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-11 18:16:49,014 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 40 msec 2023-07-11 18:16:49,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-11 18:16:49,044 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-11 18:16:49,050 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-11 18:16:49,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-11 18:16:49,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-11 18:16:49,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-11 18:16:49,072 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:49,073 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-11 18:16:49,074 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-11 18:16:49,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-11 18:16:49,101 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:16:49,101 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:16:49,101 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:16:49,101 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:16:49,102 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:49,103 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45397,1689099405546, sessionid=0x101559a084d0000, setting cluster-up flag (Was=false) 2023-07-11 18:16:49,126 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:49,132 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-11 18:16:49,133 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:49,138 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:49,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-11 18:16:49,145 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:49,148 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.hbase-snapshot/.tmp 2023-07-11 18:16:49,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-11 18:16:49,243 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(951): ClusterId : dd54002b-0292-4dbd-8286-dd3ac8efa65a 2023-07-11 18:16:49,243 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(951): ClusterId : dd54002b-0292-4dbd-8286-dd3ac8efa65a 2023-07-11 18:16:49,244 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(951): ClusterId : dd54002b-0292-4dbd-8286-dd3ac8efa65a 2023-07-11 18:16:49,252 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:16:49,252 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:16:49,252 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:16:49,261 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:16:49,261 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:16:49,261 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:16:49,262 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-11 18:16:49,262 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:16:49,261 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:16:49,262 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:16:49,264 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
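Several ZKUtil lines above report "Set watcher on znode that does not yet exist" for paths such as /hbase/balancer, /hbase/normalizer and the switch znodes. In ZooKeeper an exists() call registers a watch even when it returns null, so the client is notified the moment the node is created. A small sketch with the plain ZooKeeper client API, assuming the same 127.0.0.1:58592 quorum as the test; this is an illustration of the mechanism, not HBase's ZKUtil itself.

import org.apache.zookeeper.ZooKeeper;

public class WatchAbsentZNode {
    public static void main(String[] args) throws Exception {
        // Session timeout matches the 90000 ms seen in the ReadOnlyZKClient lines above.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:58592", 90000, event -> {});
        // exists() returns null for a missing node but still registers the watcher,
        // which fires with a NodeCreated event once /hbase/balancer appears.
        Object stat = zk.exists("/hbase/balancer",
            event -> System.out.println("watch fired: " + event));
        System.out.println(stat == null ? "znode absent, watch set" : "znode present");
        zk.close();
    }
}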
2023-07-11 18:16:49,266 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-11 18:16:49,267 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-11 18:16:49,267 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:16:49,267 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:16:49,267 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:16:49,269 DEBUG [RS:0;jenkins-hbase4:45471] zookeeper.ReadOnlyZKClient(139): Connect 0x27d8117e to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:49,275 DEBUG [RS:1;jenkins-hbase4:37773] zookeeper.ReadOnlyZKClient(139): Connect 0x531542d5 to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:49,275 DEBUG [RS:2;jenkins-hbase4:45821] zookeeper.ReadOnlyZKClient(139): Connect 0x1039ce8b to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:49,289 DEBUG [RS:1;jenkins-hbase4:37773] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@54b33a63, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:49,289 DEBUG [RS:0;jenkins-hbase4:45471] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7e3ba645, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:49,291 DEBUG [RS:1;jenkins-hbase4:37773] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a3df1ef, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:16:49,291 DEBUG [RS:0;jenkins-hbase4:45471] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a85cab4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:16:49,297 DEBUG [RS:2;jenkins-hbase4:45821] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1efe32fc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:49,298 DEBUG [RS:2;jenkins-hbase4:45821] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5fc338c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:16:49,323 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37773 2023-07-11 18:16:49,323 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45471 2023-07-11 18:16:49,325 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:45821 2023-07-11 18:16:49,338 INFO [RS:1;jenkins-hbase4:37773] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:16:49,338 INFO [RS:1;jenkins-hbase4:37773] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:16:49,339 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:16:49,343 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:37773, startcode=1689099407650 2023-07-11 18:16:49,346 INFO [RS:0;jenkins-hbase4:45471] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:16:49,347 INFO [RS:0;jenkins-hbase4:45471] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:16:49,347 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:16:49,347 INFO [RS:2;jenkins-hbase4:45821] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:16:49,348 INFO [RS:2;jenkins-hbase4:45821] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:16:49,348 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-11 18:16:49,349 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:45471, startcode=1689099407428 2023-07-11 18:16:49,350 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:45821, startcode=1689099407865 2023-07-11 18:16:49,367 DEBUG [RS:2;jenkins-hbase4:45821] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:16:49,367 DEBUG [RS:1;jenkins-hbase4:37773] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:16:49,367 DEBUG [RS:0;jenkins-hbase4:45471] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:16:49,403 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-11 18:16:49,439 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38599, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:16:49,441 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39669, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:16:49,439 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58077, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:16:49,452 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:49,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 18:16:49,462 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:49,462 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 18:16:49,463 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:49,465 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 18:16:49,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-11 18:16:49,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:16:49,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:16:49,468 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:16:49,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:16:49,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-11 18:16:49,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:16:49,469 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,479 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689099439478 2023-07-11 18:16:49,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-11 18:16:49,484 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 18:16:49,485 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-11 18:16:49,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-11 18:16:49,488 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(2830): Master is not running yet 2023-07-11 18:16:49,488 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(2830): Master is not running yet 2023-07-11 18:16:49,488 WARN [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-11 18:16:49,488 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(2830): Master is not running yet 2023-07-11 18:16:49,488 WARN [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-11 18:16:49,488 WARN [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 
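The reportForDuty exchange above is a plain bounded retry: each region server calls the master, the master answers ServerNotRunningYetException while it is still initialising, and the region server sleeps 100 ms and tries again. The following generic sketch shows that loop; the supplier, pause and deadline are example values for illustration, not HRegionServer's actual fields.

import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public final class ReportForDutyRetry {
    // Keep calling the report until it succeeds or the deadline passes.
    static boolean retry(BooleanSupplier tryReport, long pauseMs, long deadlineMs)
            throws InterruptedException {
        long stop = System.currentTimeMillis() + deadlineMs;
        while (System.currentTimeMillis() < stop) {
            if (tryReport.getAsBoolean()) {
                return true;                       // master accepted the registration
            }
            TimeUnit.MILLISECONDS.sleep(pauseMs);  // "Master is not running yet": back off briefly
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the RPC that fails with ServerNotRunningYetException during startup.
        boolean ok = retry(() -> Math.random() > 0.7, 100, 5_000);
        System.out.println(ok ? "registered with master" : "gave up");
    }
}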
2023-07-11 18:16:49,488 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 18:16:49,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-11 18:16:49,497 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-11 18:16:49,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-11 18:16:49,498 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-11 18:16:49,501 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-11 18:16:49,504 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-11 18:16:49,505 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-11 18:16:49,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-11 18:16:49,509 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-11 18:16:49,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099409511,5,FailOnTimeoutGroup] 2023-07-11 18:16:49,512 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099409512,5,FailOnTimeoutGroup] 2023-07-11 18:16:49,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
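The cleaner chores enabled above (LogsCleaner and HFileCleaner, each on a 600000 ms period) are periodic background tasks that scan the old-WAL and archive directories and consult the delegate cleaners initialized in the preceding lines (TimeToLive, snapshot, HFileLink) before deleting anything. A rough stand-in using a ScheduledExecutorService follows; the local /tmp/oldWALs path and the report-only behaviour are placeholders for illustration, not the real delegate chain.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OldWalCleanerSketch {
    public static void main(String[] args) {
        ScheduledExecutorService chore = Executors.newSingleThreadScheduledExecutor();
        Path oldWals = Paths.get("/tmp/oldWALs");   // stand-in for the HDFS oldWALs directory
        chore.scheduleAtFixedRate(() -> {
            try (DirectoryStream<Path> files = Files.newDirectoryStream(oldWals)) {
                for (Path f : files) {
                    // A real chore asks its delegate cleaners (TTL, replication, snapshots)
                    // whether each file may go; this sketch only reports the candidates.
                    System.out.println("cleaner candidate: " + f);
                }
            } catch (IOException e) {
                e.printStackTrace();                 // directory missing or unreadable
            }
        }, 0, 600_000, TimeUnit.MILLISECONDS);
    }
}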
2023-07-11 18:16:49,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-11 18:16:49,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,579 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 18:16:49,581 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 18:16:49,581 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582 2023-07-11 18:16:49,589 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:37773, startcode=1689099407650 2023-07-11 18:16:49,589 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:45821, startcode=1689099407865 2023-07-11 18:16:49,589 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:45471, startcode=1689099407428 2023-07-11 18:16:49,599 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,605 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 18:16:49,606 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-11 18:16:49,612 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:49,613 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,613 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 18:16:49,613 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-11 18:16:49,614 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582 2023-07-11 18:16:49,614 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40365 2023-07-11 18:16:49,614 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44785 2023-07-11 18:16:49,615 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,615 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 18:16:49,615 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-11 18:16:49,617 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582 2023-07-11 18:16:49,617 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40365 2023-07-11 18:16:49,618 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44785 2023-07-11 18:16:49,618 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:16:49,617 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582 2023-07-11 18:16:49,618 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40365 2023-07-11 18:16:49,618 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44785 2023-07-11 18:16:49,621 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:49,622 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:16:49,623 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:49,624 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:16:49,627 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:49,627 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:16:49,628 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:16:49,629 DEBUG [RS:1;jenkins-hbase4:37773] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,629 WARN [RS:1;jenkins-hbase4:37773] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:16:49,629 INFO [RS:1;jenkins-hbase4:37773] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:49,629 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:49,630 DEBUG [RS:2;jenkins-hbase4:45821] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,630 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,630 DEBUG [RS:0;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,631 WARN [RS:0;jenkins-hbase4:45471] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:16:49,630 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:16:49,630 WARN [RS:2;jenkins-hbase4:45821] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 18:16:49,632 INFO [RS:2;jenkins-hbase4:45821] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:49,632 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45471,1689099407428] 2023-07-11 18:16:49,632 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,631 INFO [RS:0;jenkins-hbase4:45471] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:49,632 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45821,1689099407865] 2023-07-11 18:16:49,633 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37773,1689099407650] 2023-07-11 18:16:49,633 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,642 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:49,644 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 18:16:49,650 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:49,653 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:49,654 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:49,659 DEBUG [RS:1;jenkins-hbase4:37773] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,660 DEBUG [RS:1;jenkins-hbase4:37773] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,660 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 18:16:49,660 DEBUG [RS:1;jenkins-hbase4:37773] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,663 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:16:49,664 DEBUG [RS:0;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,665 DEBUG [RS:2;jenkins-hbase4:45821] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,666 DEBUG [RS:0;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,667 DEBUG [RS:2;jenkins-hbase4:45821] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,667 DEBUG [RS:0;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,667 DEBUG [RS:2;jenkins-hbase4:45821] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,670 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:49,671 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10741467520, jitterRate=3.771185874938965E-4}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:16:49,671 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:16:49,672 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:16:49,672 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:16:49,672 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:16:49,672 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:16:49,672 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:16:49,674 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:16:49,674 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 
1588230740: 2023-07-11 18:16:49,679 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:16:49,680 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:16:49,679 DEBUG [RS:1;jenkins-hbase4:37773] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:16:49,682 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 18:16:49,682 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-11 18:16:49,693 INFO [RS:1;jenkins-hbase4:37773] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:16:49,693 INFO [RS:2;jenkins-hbase4:45821] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:16:49,693 INFO [RS:0;jenkins-hbase4:45471] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:16:49,694 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-11 18:16:49,708 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-11 18:16:49,713 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-11 18:16:49,722 INFO [RS:1;jenkins-hbase4:37773] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:16:49,722 INFO [RS:0;jenkins-hbase4:45471] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:16:49,722 INFO [RS:2;jenkins-hbase4:45821] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:16:49,729 INFO [RS:2;jenkins-hbase4:45821] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:16:49,729 INFO [RS:0;jenkins-hbase4:45471] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:16:49,729 INFO [RS:1;jenkins-hbase4:37773] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:16:49,730 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 18:16:49,730 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,730 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,731 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:16:49,733 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:16:49,733 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:16:49,742 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,742 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,742 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,742 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,742 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:2;jenkins-hbase4:45821] 
executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:16:49,744 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:16:49,744 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:1;jenkins-hbase4:37773] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,743 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,744 DEBUG [RS:2;jenkins-hbase4:45821] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,745 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,745 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:16:49,745 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,746 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,746 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, 
maxPoolSize=1 2023-07-11 18:16:49,746 DEBUG [RS:0;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:49,752 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,752 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,753 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,754 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,754 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,754 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,754 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,754 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,754 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,774 INFO [RS:1;jenkins-hbase4:37773] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:16:49,774 INFO [RS:2;jenkins-hbase4:45821] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:16:49,776 INFO [RS:0;jenkins-hbase4:45471] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:16:49,779 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37773,1689099407650-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,779 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45821,1689099407865-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:49,779 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45471,1689099407428-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 18:16:49,842 INFO [RS:1;jenkins-hbase4:37773] regionserver.Replication(203): jenkins-hbase4.apache.org,37773,1689099407650 started 2023-07-11 18:16:49,842 INFO [RS:0;jenkins-hbase4:45471] regionserver.Replication(203): jenkins-hbase4.apache.org,45471,1689099407428 started 2023-07-11 18:16:49,842 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37773,1689099407650, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37773, sessionid=0x101559a084d0002 2023-07-11 18:16:49,842 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45471,1689099407428, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45471, sessionid=0x101559a084d0001 2023-07-11 18:16:49,843 INFO [RS:2;jenkins-hbase4:45821] regionserver.Replication(203): jenkins-hbase4.apache.org,45821,1689099407865 started 2023-07-11 18:16:49,843 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:16:49,843 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45821,1689099407865, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45821, sessionid=0x101559a084d0003 2023-07-11 18:16:49,843 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:16:49,843 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:16:49,845 DEBUG [RS:2;jenkins-hbase4:45821] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,843 DEBUG [RS:1;jenkins-hbase4:37773] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,846 DEBUG [RS:2;jenkins-hbase4:45821] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45821,1689099407865' 2023-07-11 18:16:49,843 DEBUG [RS:0;jenkins-hbase4:45471] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,846 DEBUG [RS:2;jenkins-hbase4:45821] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:16:49,846 DEBUG [RS:1;jenkins-hbase4:37773] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37773,1689099407650' 2023-07-11 18:16:49,847 DEBUG [RS:1;jenkins-hbase4:37773] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:16:49,846 DEBUG [RS:0;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45471,1689099407428' 2023-07-11 18:16:49,847 DEBUG [RS:0;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:16:49,847 DEBUG [RS:2;jenkins-hbase4:45821] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:16:49,848 DEBUG [RS:1;jenkins-hbase4:37773] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:16:49,848 DEBUG 
[RS:0;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:16:49,848 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:16:49,848 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:16:49,848 DEBUG [RS:2;jenkins-hbase4:45821] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:49,848 DEBUG [RS:2;jenkins-hbase4:45821] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45821,1689099407865' 2023-07-11 18:16:49,848 DEBUG [RS:2;jenkins-hbase4:45821] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:16:49,848 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:16:49,848 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:16:49,848 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:16:49,848 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:16:49,849 DEBUG [RS:0;jenkins-hbase4:45471] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:49,849 DEBUG [RS:2;jenkins-hbase4:45821] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:16:49,851 DEBUG [RS:2;jenkins-hbase4:45821] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:16:49,851 INFO [RS:2;jenkins-hbase4:45821] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:16:49,851 INFO [RS:2;jenkins-hbase4:45821] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 18:16:49,850 DEBUG [RS:0;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45471,1689099407428' 2023-07-11 18:16:49,853 DEBUG [RS:0;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:16:49,850 DEBUG [RS:1;jenkins-hbase4:37773] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:49,853 DEBUG [RS:1;jenkins-hbase4:37773] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37773,1689099407650' 2023-07-11 18:16:49,853 DEBUG [RS:1;jenkins-hbase4:37773] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:16:49,853 DEBUG [RS:1;jenkins-hbase4:37773] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:16:49,854 DEBUG [RS:1;jenkins-hbase4:37773] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:16:49,855 DEBUG [RS:0;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:16:49,855 DEBUG [RS:0;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:16:49,856 INFO [RS:0;jenkins-hbase4:45471] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:16:49,854 INFO [RS:1;jenkins-hbase4:37773] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:16:49,856 INFO [RS:0;jenkins-hbase4:45471] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 18:16:49,858 INFO [RS:1;jenkins-hbase4:37773] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 18:16:49,865 DEBUG [jenkins-hbase4:45397] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-11 18:16:49,881 DEBUG [jenkins-hbase4:45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:49,884 DEBUG [jenkins-hbase4:45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:49,884 DEBUG [jenkins-hbase4:45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:49,884 DEBUG [jenkins-hbase4:45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:49,884 DEBUG [jenkins-hbase4:45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:49,888 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37773,1689099407650, state=OPENING 2023-07-11 18:16:49,896 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-11 18:16:49,899 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:49,900 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:49,904 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:49,963 INFO [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37773%2C1689099407650, suffix=, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37773,1689099407650, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:49,965 INFO [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45471%2C1689099407428, suffix=, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45471,1689099407428, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:49,963 INFO [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45821%2C1689099407865, suffix=, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45821,1689099407865, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:49,990 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:49,990 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:49,991 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:49,994 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:49,994 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:49,995 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:49,995 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:50,005 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:50,007 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:50,014 INFO [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37773,1689099407650/jenkins-hbase4.apache.org%2C37773%2C1689099407650.1689099409969 2023-07-11 18:16:50,017 DEBUG [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK], DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK], DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK]] 2023-07-11 18:16:50,017 INFO [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45821,1689099407865/jenkins-hbase4.apache.org%2C45821%2C1689099407865.1689099409968 2023-07-11 18:16:50,017 INFO [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45471,1689099407428/jenkins-hbase4.apache.org%2C45471%2C1689099407428.1689099409969 2023-07-11 18:16:50,018 DEBUG [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK], DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK]] 2023-07-11 18:16:50,018 DEBUG [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK], DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK], DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK]] 2023-07-11 18:16:50,068 WARN [ReadOnlyZKClient-127.0.0.1:58592@0x26cd00a3] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-11 18:16:50,084 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:50,087 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:50,091 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51798, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:50,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 18:16:50,103 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:50,106 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:50,114 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51802, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:50,115 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37773] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51802 deadline: 1689099470115, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:50,118 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37773%2C1689099407650.meta, suffix=.meta, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37773,1689099407650, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:50,147 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:50,155 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 
2023-07-11 18:16:50,156 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:50,169 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37773,1689099407650/jenkins-hbase4.apache.org%2C37773%2C1689099407650.meta.1689099410120.meta 2023-07-11 18:16:50,170 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK], DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK]] 2023-07-11 18:16:50,171 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:50,172 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:16:50,175 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 18:16:50,178 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-11 18:16:50,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 18:16:50,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:50,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 18:16:50,185 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 18:16:50,189 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:16:50,191 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:50,191 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:50,191 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:16:50,193 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:50,193 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:16:50,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:50,195 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:50,195 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:16:50,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:50,196 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:16:50,198 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:50,198 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:50,198 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 18:16:50,199 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:50,201 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:50,204 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:50,208 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-11 18:16:50,213 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:16:50,215 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9956328800, jitterRate=-0.07274462282657623}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:16:50,215 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:16:50,230 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689099410081 2023-07-11 18:16:50,262 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 18:16:50,264 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 18:16:50,265 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37773,1689099407650, state=OPEN 2023-07-11 18:16:50,268 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:50,268 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:50,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-11 18:16:50,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37773,1689099407650 in 364 msec 2023-07-11 18:16:50,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-11 18:16:50,281 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 583 msec 2023-07-11 18:16:50,287 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 994 msec 2023-07-11 18:16:50,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689099410288, completionTime=-1 2023-07-11 18:16:50,288 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-11 18:16:50,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-11 18:16:50,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-11 18:16:50,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689099470364 2023-07-11 18:16:50,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689099530364 2023-07-11 18:16:50,364 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 76 msec 2023-07-11 18:16:50,387 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45397,1689099405546-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:50,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45397,1689099405546-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:50,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45397,1689099405546-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:50,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45397, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:50,390 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:50,398 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-11 18:16:50,409 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-11 18:16:50,410 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 18:16:50,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-11 18:16:50,425 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:16:50,428 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:16:50,447 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,451 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735 empty. 2023-07-11 18:16:50,452 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,452 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-11 18:16:50,512 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-11 18:16:50,514 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => ddeec04b60fd5a8c2d4719765d0b2735, NAME => 'hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:50,538 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:50,538 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing ddeec04b60fd5a8c2d4719765d0b2735, disabling compactions & flushes 2023-07-11 18:16:50,539 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 
2023-07-11 18:16:50,539 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:16:50,539 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. after waiting 0 ms 2023-07-11 18:16:50,539 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:16:50,539 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:16:50,539 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for ddeec04b60fd5a8c2d4719765d0b2735: 2023-07-11 18:16:50,543 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:16:50,563 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099410547"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099410547"}]},"ts":"1689099410547"} 2023-07-11 18:16:50,605 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:16:50,607 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:16:50,614 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099410607"}]},"ts":"1689099410607"} 2023-07-11 18:16:50,620 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-11 18:16:50,625 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:50,625 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:50,625 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:50,625 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:50,625 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:50,628 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ddeec04b60fd5a8c2d4719765d0b2735, ASSIGN}] 2023-07-11 18:16:50,631 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=ddeec04b60fd5a8c2d4719765d0b2735, ASSIGN 2023-07-11 18:16:50,633 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=ddeec04b60fd5a8c2d4719765d0b2735, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:16:50,637 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:16:50,641 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-11 18:16:50,644 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:16:50,646 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:16:50,650 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,652 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 empty. 
2023-07-11 18:16:50,652 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,652 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-11 18:16:50,692 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-11 18:16:50,696 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => b41d0021b1b281d3ab8046d2e4311514, NAME => 'hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:50,727 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:50,727 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing b41d0021b1b281d3ab8046d2e4311514, disabling compactions & flushes 2023-07-11 18:16:50,727 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:50,727 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:50,727 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. after waiting 0 ms 2023-07-11 18:16:50,727 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:50,727 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 
2023-07-11 18:16:50,727 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:16:50,741 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:16:50,744 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099410743"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099410743"}]},"ts":"1689099410743"} 2023-07-11 18:16:50,750 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:16:50,752 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:16:50,752 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099410752"}]},"ts":"1689099410752"} 2023-07-11 18:16:50,758 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-11 18:16:50,763 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:50,763 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:50,763 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:50,763 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:50,763 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:50,764 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, ASSIGN}] 2023-07-11 18:16:50,767 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, ASSIGN 2023-07-11 18:16:50,770 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:50,771 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-11 18:16:50,773 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ddeec04b60fd5a8c2d4719765d0b2735, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:50,773 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:50,774 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099410773"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099410773"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099410773"}]},"ts":"1689099410773"} 2023-07-11 18:16:50,774 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099410773"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099410773"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099410773"}]},"ts":"1689099410773"} 2023-07-11 18:16:50,781 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure ddeec04b60fd5a8c2d4719765d0b2735, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:16:50,785 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:50,939 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:50,939 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:50,943 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:50,956 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:50,959 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 
2023-07-11 18:16:50,959 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b41d0021b1b281d3ab8046d2e4311514, NAME => 'hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:50,960 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ddeec04b60fd5a8c2d4719765d0b2735, NAME => 'hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:50,960 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:16:50,960 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. service=MultiRowMutationService 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,961 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,961 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,962 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,965 INFO [StoreOpener-ddeec04b60fd5a8c2d4719765d0b2735-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,965 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,970 DEBUG [StoreOpener-ddeec04b60fd5a8c2d4719765d0b2735-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/info 2023-07-11 18:16:50,970 DEBUG [StoreOpener-ddeec04b60fd5a8c2d4719765d0b2735-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/info 2023-07-11 18:16:50,970 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m 2023-07-11 18:16:50,970 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m 2023-07-11 18:16:50,971 INFO [StoreOpener-ddeec04b60fd5a8c2d4719765d0b2735-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ddeec04b60fd5a8c2d4719765d0b2735 columnFamilyName info 2023-07-11 18:16:50,971 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b41d0021b1b281d3ab8046d2e4311514 columnFamilyName m 2023-07-11 18:16:50,972 INFO [StoreOpener-ddeec04b60fd5a8c2d4719765d0b2735-1] regionserver.HStore(310): Store=ddeec04b60fd5a8c2d4719765d0b2735/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:50,973 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(310): Store=b41d0021b1b281d3ab8046d2e4311514/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:50,976 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,978 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,979 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,979 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,985 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:50,985 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:16:50,992 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:50,993 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:50,993 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b41d0021b1b281d3ab8046d2e4311514; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@93cdf4c, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:50,993 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:16:50,994 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ddeec04b60fd5a8c2d4719765d0b2735; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11832846240, jitterRate=0.10201968252658844}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:50,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ddeec04b60fd5a8c2d4719765d0b2735: 2023-07-11 18:16:50,998 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735., pid=8, masterSystemTime=1689099410939 2023-07-11 18:16:50,998 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514., pid=9, masterSystemTime=1689099410939 2023-07-11 18:16:51,004 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:51,005 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:51,006 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:16:51,006 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:16:51,006 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:51,007 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099411005"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099411005"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099411005"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099411005"}]},"ts":"1689099411005"} 2023-07-11 18:16:51,007 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=ddeec04b60fd5a8c2d4719765d0b2735, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:51,008 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099411007"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099411007"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099411007"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099411007"}]},"ts":"1689099411007"} 2023-07-11 18:16:51,023 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-11 18:16:51,025 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,37773,1689099407650 in 225 msec 2023-07-11 18:16:51,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-11 18:16:51,027 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure ddeec04b60fd5a8c2d4719765d0b2735, server=jenkins-hbase4.apache.org,45821,1689099407865 in 232 msec 2023-07-11 18:16:51,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-11 18:16:51,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, ASSIGN in 259 msec 2023-07-11 18:16:51,032 INFO 
[PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-11 18:16:51,032 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:16:51,033 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=ddeec04b60fd5a8c2d4719765d0b2735, ASSIGN in 399 msec 2023-07-11 18:16:51,033 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099411033"}]},"ts":"1689099411033"} 2023-07-11 18:16:51,034 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:16:51,036 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099411035"}]},"ts":"1689099411035"} 2023-07-11 18:16:51,036 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-11 18:16:51,043 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-11 18:16:51,043 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:16:51,047 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 407 msec 2023-07-11 18:16:51,047 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:16:51,050 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 636 msec 2023-07-11 18:16:51,125 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-11 18:16:51,126 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:16:51,127 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:51,155 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:51,157 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43062, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:51,166 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-11 18:16:51,167 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-11 18:16:51,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-11 18:16:51,214 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:16:51,228 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 57 msec 2023-07-11 18:16:51,237 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 18:16:51,253 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:16:51,259 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 22 msec 2023-07-11 18:16:51,265 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:51,265 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:51,269 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:16:51,286 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-11 18:16:51,288 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-11 18:16:51,291 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-11 18:16:51,292 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.176sec 2023-07-11 18:16:51,296 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-11 18:16:51,297 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-11 18:16:51,297 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-11 18:16:51,302 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45397,1689099405546-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-11 18:16:51,303 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45397,1689099405546-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-11 18:16:51,316 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-11 18:16:51,344 DEBUG [Listener at localhost/35107] zookeeper.ReadOnlyZKClient(139): Connect 0x326b7986 to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:51,350 DEBUG [Listener at localhost/35107] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2b3de72, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:51,371 DEBUG [hconnection-0x7b3db8b3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:51,388 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46112, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:51,401 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:16:51,403 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:51,422 DEBUG [Listener at localhost/35107] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-11 18:16:51,430 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57346, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-11 18:16:51,450 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-11 18:16:51,450 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:16:51,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-11 18:16:51,458 DEBUG [Listener at localhost/35107] zookeeper.ReadOnlyZKClient(139): Connect 0x30296f6d to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:51,465 DEBUG [Listener at localhost/35107] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@120ad869, 
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:51,465 INFO [Listener at localhost/35107] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-07-11 18:16:51,469 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:16:51,471 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101559a084d000a connected 2023-07-11 18:16:51,512 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=422, OpenFileDescriptor=679, MaxFileDescriptor=60000, SystemLoadAverage=624, ProcessCount=172, AvailableMemoryMB=3098 2023-07-11 18:16:51,515 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-11 18:16:51,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:51,548 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:51,600 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-11 18:16:51,612 INFO [Listener at localhost/35107] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:16:51,612 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:51,612 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:51,613 INFO [Listener at localhost/35107] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:16:51,613 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:16:51,613 INFO [Listener at localhost/35107] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:16:51,613 INFO [Listener at localhost/35107] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:16:51,616 INFO [Listener at localhost/35107] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37889 2023-07-11 18:16:51,617 INFO [Listener at localhost/35107] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:16:51,618 DEBUG [Listener at localhost/35107] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, 
evictRemainRatio=0.5 2023-07-11 18:16:51,619 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:51,622 INFO [Listener at localhost/35107] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:16:51,626 INFO [Listener at localhost/35107] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37889 connecting to ZooKeeper ensemble=127.0.0.1:58592 2023-07-11 18:16:51,630 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:378890x0, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:16:51,631 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37889-0x101559a084d000b connected 2023-07-11 18:16:51,631 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(162): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:16:51,632 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(162): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-11 18:16:51,633 DEBUG [Listener at localhost/35107] zookeeper.ZKUtil(164): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:16:51,634 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37889 2023-07-11 18:16:51,634 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37889 2023-07-11 18:16:51,635 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37889 2023-07-11 18:16:51,638 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37889 2023-07-11 18:16:51,639 DEBUG [Listener at localhost/35107] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37889 2023-07-11 18:16:51,641 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:16:51,641 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:16:51,641 INFO [Listener at localhost/35107] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:16:51,642 INFO [Listener at localhost/35107] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:16:51,642 INFO [Listener at localhost/35107] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:16:51,642 INFO [Listener at localhost/35107] 
http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:16:51,642 INFO [Listener at localhost/35107] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:16:51,643 INFO [Listener at localhost/35107] http.HttpServer(1146): Jetty bound to port 40607 2023-07-11 18:16:51,643 INFO [Listener at localhost/35107] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:16:51,651 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:51,651 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@480cfad4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:16:51,651 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:51,652 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@593950e0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:16:51,782 INFO [Listener at localhost/35107] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:16:51,783 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:16:51,783 INFO [Listener at localhost/35107] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:16:51,784 INFO [Listener at localhost/35107] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:16:51,785 INFO [Listener at localhost/35107] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:16:51,786 INFO [Listener at localhost/35107] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@a749b0e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/java.io.tmpdir/jetty-0_0_0_0-40607-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8611740995931468820/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:16:51,788 INFO [Listener at localhost/35107] server.AbstractConnector(333): Started ServerConnector@2fbb41cf{HTTP/1.1, (http/1.1)}{0.0.0.0:40607} 2023-07-11 18:16:51,788 INFO [Listener at localhost/35107] server.Server(415): Started @12138ms 2023-07-11 18:16:51,793 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(951): ClusterId : dd54002b-0292-4dbd-8286-dd3ac8efa65a 2023-07-11 18:16:51,794 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:16:51,797 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:16:51,797 DEBUG 
[RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:16:51,799 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:16:51,801 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ReadOnlyZKClient(139): Connect 0x7bf55d29 to 127.0.0.1:58592 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:16:51,818 DEBUG [RS:3;jenkins-hbase4:37889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7532ac27, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:16:51,818 DEBUG [RS:3;jenkins-hbase4:37889] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@710d3659, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:16:51,831 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:37889 2023-07-11 18:16:51,832 INFO [RS:3;jenkins-hbase4:37889] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:16:51,832 INFO [RS:3;jenkins-hbase4:37889] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:16:51,832 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:16:51,833 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45397,1689099405546 with isa=jenkins-hbase4.apache.org/172.31.14.131:37889, startcode=1689099411612 2023-07-11 18:16:51,834 DEBUG [RS:3;jenkins-hbase4:37889] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:16:51,844 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54247, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:16:51,844 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45397] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,844 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 18:16:51,845 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582 2023-07-11 18:16:51,845 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40365 2023-07-11 18:16:51,845 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44785 2023-07-11 18:16:51,851 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:16:51,851 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:16:51,851 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:16:51,851 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:16:51,853 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37889,1689099411612] 2023-07-11 18:16:51,853 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:51,853 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:51,853 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:51,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:51,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:51,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:51,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:51,854 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:51,855 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ZKUtil(162): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,855 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:51,855 WARN [RS:3;jenkins-hbase4:37889] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:16:51,855 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,855 INFO [RS:3;jenkins-hbase4:37889] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:51,859 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,859 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,860 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,865 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:51,866 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:16:51,874 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45397,1689099405546] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-11 18:16:51,875 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ZKUtil(162): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:51,876 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ZKUtil(162): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:51,876 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ZKUtil(162): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:51,877 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ZKUtil(162): 
regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,878 DEBUG [RS:3;jenkins-hbase4:37889] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:16:51,878 INFO [RS:3;jenkins-hbase4:37889] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:16:51,882 INFO [RS:3;jenkins-hbase4:37889] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:16:51,883 INFO [RS:3;jenkins-hbase4:37889] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:16:51,883 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:51,883 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:16:51,885 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,886 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,887 DEBUG [RS:3;jenkins-hbase4:37889] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:16:51,890 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is 
enabled. 2023-07-11 18:16:51,891 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:51,891 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:51,903 INFO [RS:3;jenkins-hbase4:37889] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:16:51,903 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37889,1689099411612-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:16:51,916 INFO [RS:3;jenkins-hbase4:37889] regionserver.Replication(203): jenkins-hbase4.apache.org,37889,1689099411612 started 2023-07-11 18:16:51,916 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37889,1689099411612, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37889, sessionid=0x101559a084d000b 2023-07-11 18:16:51,916 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:16:51,916 DEBUG [RS:3;jenkins-hbase4:37889] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,916 DEBUG [RS:3;jenkins-hbase4:37889] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37889,1689099411612' 2023-07-11 18:16:51,917 DEBUG [RS:3;jenkins-hbase4:37889] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:16:51,917 DEBUG [RS:3;jenkins-hbase4:37889] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:16:51,918 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:16:51,918 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:16:51,918 DEBUG [RS:3;jenkins-hbase4:37889] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:51,918 DEBUG [RS:3;jenkins-hbase4:37889] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37889,1689099411612' 2023-07-11 18:16:51,918 DEBUG [RS:3;jenkins-hbase4:37889] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:16:51,918 DEBUG [RS:3;jenkins-hbase4:37889] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:16:51,919 DEBUG [RS:3;jenkins-hbase4:37889] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:16:51,919 INFO [RS:3;jenkins-hbase4:37889] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:16:51,919 INFO [RS:3;jenkins-hbase4:37889] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 18:16:51,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:16:51,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:51,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:51,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:16:51,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:51,938 DEBUG [hconnection-0x51dbb1fe-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:51,943 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46124, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:51,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:51,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:51,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:16:51,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:51,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:57346 deadline: 1689100611976, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
2023-07-11 18:16:51,979 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:16:51,981 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:51,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:51,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:51,984 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:16:51,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:51,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:51,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:51,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:52,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:52,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:52,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:52,006 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:52,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:52,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:52,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:52,018 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:52,022 INFO 
[RS:3;jenkins-hbase4:37889] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37889%2C1689099411612, suffix=, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37889,1689099411612, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:52,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:52,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:52,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:52,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:52,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:52,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(238): Moving server region b41d0021b1b281d3ab8046d2e4311514, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:52,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE 2023-07-11 18:16:52,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:52,055 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE 2023-07-11 18:16:52,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-11 18:16:52,067 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:52,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-11 18:16:52,067 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099412067"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099412067"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099412067"}]},"ts":"1689099412067"} 2023-07-11 
18:16:52,067 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:52,067 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:52,064 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:52,068 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-11 18:16:52,072 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37773,1689099407650, state=CLOSING 2023-07-11 18:16:52,072 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:52,075 INFO [RS:3;jenkins-hbase4:37889] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,37889,1689099411612/jenkins-hbase4.apache.org%2C37889%2C1689099411612.1689099412024 2023-07-11 18:16:52,078 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:52,078 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:52,078 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:52,085 DEBUG [RS:3;jenkins-hbase4:37889] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK]] 2023-07-11 18:16:52,085 DEBUG [PEWorker-1] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:52,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-11 18:16:52,245 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:16:52,245 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:16:52,245 
DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:16:52,245 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:16:52,245 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:16:52,246 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-11 18:16:52,339 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/info/8daf5d36959849338dfd08a93877c482 2023-07-11 18:16:52,444 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/table/d80457fa51ad47e7a3cc1d97e990f7ee 2023-07-11 18:16:52,472 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/info/8daf5d36959849338dfd08a93877c482 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/8daf5d36959849338dfd08a93877c482 2023-07-11 18:16:52,486 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/8daf5d36959849338dfd08a93877c482, entries=21, sequenceid=15, filesize=7.1 K 2023-07-11 18:16:52,490 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/table/d80457fa51ad47e7a3cc1d97e990f7ee as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/d80457fa51ad47e7a3cc1d97e990f7ee 2023-07-11 18:16:52,500 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/d80457fa51ad47e7a3cc1d97e990f7ee, entries=4, sequenceid=15, filesize=4.8 K 2023-07-11 18:16:52,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 257ms, sequenceid=15, compaction requested=false 2023-07-11 18:16:52,505 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-11 18:16:52,525 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-11 18:16:52,527 DEBUG 
[RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:16:52,527 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:16:52,527 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:16:52,527 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45471,1689099407428 record at close sequenceid=15 2023-07-11 18:16:52,530 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-11 18:16:52,534 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-11 18:16:52,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-11 18:16:52,537 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37773,1689099407650 in 456 msec 2023-07-11 18:16:52,538 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:16:52,689 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 18:16:52,689 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45471,1689099407428, state=OPENING 2023-07-11 18:16:52,692 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:52,692 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:52,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:52,847 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:52,847 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:52,852 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53262, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:52,859 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 18:16:52,859 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:52,874 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45471%2C1689099407428.meta, suffix=.meta, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45471,1689099407428, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:52,908 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:52,908 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:52,911 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:52,924 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45471,1689099407428/jenkins-hbase4.apache.org%2C45471%2C1689099407428.meta.1689099412876.meta 2023-07-11 18:16:52,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK], DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK]] 2023-07-11 18:16:52,924 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:52,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:16:52,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 18:16:52,925 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-11 18:16:52,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 18:16:52,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:52,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 18:16:52,925 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 18:16:52,928 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:16:52,930 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:52,930 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:52,932 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:16:52,947 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/8daf5d36959849338dfd08a93877c482 2023-07-11 18:16:52,947 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:52,948 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:16:52,950 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:52,950 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:52,950 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:16:52,951 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:52,951 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:16:52,953 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:52,953 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:52,954 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 18:16:52,983 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/d80457fa51ad47e7a3cc1d97e990f7ee 2023-07-11 18:16:52,984 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:52,985 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:52,989 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:52,993 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 18:16:52,995 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:16:52,997 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11714144320, jitterRate=0.09096470475196838}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:16:52,997 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:16:52,999 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1689099412847 2023-07-11 18:16:53,006 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 18:16:53,007 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 18:16:53,007 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45471,1689099407428, state=OPEN 2023-07-11 18:16:53,009 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:53,009 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:53,015 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-11 18:16:53,015 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,45471,1689099407428 in 317 msec 2023-07-11 18:16:53,018 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 962 msec 2023-07-11 18:16:53,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-11 18:16:53,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b41d0021b1b281d3ab8046d2e4311514, disabling compactions & flushes 2023-07-11 18:16:53,166 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,166 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. after waiting 0 ms 2023-07-11 18:16:53,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b41d0021b1b281d3ab8046d2e4311514 1/1 column families, dataSize=1.38 KB heapSize=2.36 KB 2023-07-11 18:16:53,224 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/.tmp/m/fbb05319683b4455b35806517df25ec8 2023-07-11 18:16:53,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/.tmp/m/fbb05319683b4455b35806517df25ec8 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/fbb05319683b4455b35806517df25ec8 2023-07-11 18:16:53,255 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/fbb05319683b4455b35806517df25ec8, entries=3, sequenceid=9, filesize=5.2 K 2023-07-11 18:16:53,259 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1414, heapSize ~2.34 KB/2400, currentSize=0 B/0 for b41d0021b1b281d3ab8046d2e4311514 in 92ms, sequenceid=9, compaction requested=false 2023-07-11 18:16:53,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-11 18:16:53,276 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-11 18:16:53,277 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:16:53,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,278 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:16:53,278 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b41d0021b1b281d3ab8046d2e4311514 move to jenkins-hbase4.apache.org,45471,1689099407428 record at close sequenceid=9 2023-07-11 18:16:53,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,282 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=CLOSED 2023-07-11 18:16:53,282 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099413282"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099413282"}]},"ts":"1689099413282"} 2023-07-11 18:16:53,286 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37773] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:51802 deadline: 1689099473283, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1689099407428. As of locationSeqNum=15. 2023-07-11 18:16:53,387 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:53,391 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53268, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:53,399 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-11 18:16:53,399 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,37773,1689099407650 in 1.3210 sec 2023-07-11 18:16:53,400 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:16:53,551 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 18:16:53,551 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:53,552 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099413551"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099413551"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099413551"}]},"ts":"1689099413551"} 2023-07-11 18:16:53,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:53,714 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,714 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b41d0021b1b281d3ab8046d2e4311514, NAME => 'hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:53,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:16:53,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. service=MultiRowMutationService 2023-07-11 18:16:53,715 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-11 18:16:53,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:53,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,715 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,717 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,719 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m 2023-07-11 18:16:53,719 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m 2023-07-11 18:16:53,720 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b41d0021b1b281d3ab8046d2e4311514 columnFamilyName m 2023-07-11 18:16:53,730 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/fbb05319683b4455b35806517df25ec8 2023-07-11 18:16:53,730 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(310): Store=b41d0021b1b281d3ab8046d2e4311514/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:53,732 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,735 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,740 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:53,742 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b41d0021b1b281d3ab8046d2e4311514; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4f4fc2b7, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:53,742 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:16:53,747 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514., pid=17, masterSystemTime=1689099413708 2023-07-11 18:16:53,753 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,753 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:53,755 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:53,755 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099413754"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099413754"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099413754"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099413754"}]},"ts":"1689099413754"} 2023-07-11 18:16:53,762 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-11 18:16:53,762 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45471,1689099407428 in 203 msec 2023-07-11 18:16:53,765 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE in 1.7150 sec 2023-07-11 18:16:54,068 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to default 2023-07-11 18:16:54,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:54,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:54,072 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37773] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:46124 deadline: 1689099474072, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1689099407428. As of locationSeqNum=9. 2023-07-11 18:16:54,179 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37773] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:46124 deadline: 1689099474179, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1689099407428. As of locationSeqNum=15. 2023-07-11 18:16:54,281 DEBUG [hconnection-0x51dbb1fe-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:54,284 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53276, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:54,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:54,306 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:54,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:54,310 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:54,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:16:54,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:54,324 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:16:54,326 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37773] ipc.CallRunner(144): callId: 46 service: ClientService methodName: ExecService size: 619 connection: 172.31.14.131:51802 deadline: 1689099474326, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1689099407428. As of locationSeqNum=9. 
2023-07-11 18:16:54,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-11 18:16:54,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-11 18:16:54,434 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:54,435 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:54,435 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:54,436 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:54,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-11 18:16:54,444 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:16:54,451 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:54,451 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:54,451 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:54,451 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:54,452 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:54,452 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 empty. 2023-07-11 18:16:54,452 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 empty. 2023-07-11 18:16:54,453 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec empty. 
2023-07-11 18:16:54,453 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 empty. 2023-07-11 18:16:54,453 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:54,453 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:54,453 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:54,453 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 empty. 2023-07-11 18:16:54,453 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:54,456 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:54,456 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 18:16:54,488 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-11 18:16:54,490 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 04f6129d49b9dd150b4925489b877f85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:54,490 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 32a6515c1e6bd1907cd77c5a5126ceec, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY 
=> 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:54,490 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 8ec95adb2f471e095141858fd6142996, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:54,539 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,539 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 8ec95adb2f471e095141858fd6142996, disabling compactions & flushes 2023-07-11 18:16:54,539 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:54,539 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:54,539 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. after waiting 0 ms 2023-07-11 18:16:54,539 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:54,539 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 
2023-07-11 18:16:54,539 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 8ec95adb2f471e095141858fd6142996: 2023-07-11 18:16:54,540 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,541 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 32a6515c1e6bd1907cd77c5a5126ceec, disabling compactions & flushes 2023-07-11 18:16:54,541 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:54,541 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:54,541 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. after waiting 0 ms 2023-07-11 18:16:54,541 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:54,541 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 
2023-07-11 18:16:54,541 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 32a6515c1e6bd1907cd77c5a5126ceec: 2023-07-11 18:16:54,542 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 61d04f50bda08b77d69b4e57e8f96fd8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:54,542 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,540 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 026d87329b3162df89bd14e7d23514f2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:54,542 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 04f6129d49b9dd150b4925489b877f85, disabling compactions & flushes 2023-07-11 18:16:54,548 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:54,548 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:54,548 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. after waiting 0 ms 2023-07-11 18:16:54,548 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:54,548 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 
2023-07-11 18:16:54,548 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 04f6129d49b9dd150b4925489b877f85: 2023-07-11 18:16:54,591 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 61d04f50bda08b77d69b4e57e8f96fd8, disabling compactions & flushes 2023-07-11 18:16:54,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 026d87329b3162df89bd14e7d23514f2, disabling compactions & flushes 2023-07-11 18:16:54,592 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:54,592 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:54,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:54,592 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:54,593 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. after waiting 0 ms 2023-07-11 18:16:54,593 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. after waiting 0 ms 2023-07-11 18:16:54,593 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:54,593 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:54,593 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 
2023-07-11 18:16:54,593 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:54,593 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 61d04f50bda08b77d69b4e57e8f96fd8: 2023-07-11 18:16:54,593 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 026d87329b3162df89bd14e7d23514f2: 2023-07-11 18:16:54,597 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:16:54,598 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099414598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099414598"}]},"ts":"1689099414598"} 2023-07-11 18:16:54,599 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099414598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099414598"}]},"ts":"1689099414598"} 2023-07-11 18:16:54,599 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099414598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099414598"}]},"ts":"1689099414598"} 2023-07-11 18:16:54,599 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099414598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099414598"}]},"ts":"1689099414598"} 2023-07-11 18:16:54,599 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099414598"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099414598"}]},"ts":"1689099414598"} 2023-07-11 18:16:54,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-11 18:16:54,656 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-11 18:16:54,658 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:16:54,658 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099414658"}]},"ts":"1689099414658"} 2023-07-11 18:16:54,660 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-11 18:16:54,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:54,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:54,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:54,665 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:54,666 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, ASSIGN}] 2023-07-11 18:16:54,669 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, ASSIGN 2023-07-11 18:16:54,669 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, ASSIGN 2023-07-11 18:16:54,670 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, ASSIGN 2023-07-11 18:16:54,670 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, ASSIGN 2023-07-11 18:16:54,672 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:16:54,672 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:16:54,672 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:16:54,672 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, ASSIGN 2023-07-11 18:16:54,672 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:16:54,674 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:16:54,822 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-11 18:16:54,826 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:54,826 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:54,826 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:54,826 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099414826"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099414826"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099414826"}]},"ts":"1689099414826"} 2023-07-11 18:16:54,826 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099414825"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099414825"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099414825"}]},"ts":"1689099414825"} 2023-07-11 18:16:54,826 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:54,826 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:54,826 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099414825"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099414825"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099414825"}]},"ts":"1689099414825"} 2023-07-11 18:16:54,826 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099414825"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099414825"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099414825"}]},"ts":"1689099414825"} 2023-07-11 18:16:54,827 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099414826"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099414826"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099414826"}]},"ts":"1689099414826"} 2023-07-11 18:16:54,829 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=21, state=RUNNABLE; OpenRegionProcedure 
04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:54,831 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=23, state=RUNNABLE; OpenRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:16:54,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=22, state=RUNNABLE; OpenRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:54,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=20, state=RUNNABLE; OpenRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:16:54,838 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=19, state=RUNNABLE; OpenRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:54,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-11 18:16:54,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:54,987 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:54,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61d04f50bda08b77d69b4e57e8f96fd8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 18:16:54,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 026d87329b3162df89bd14e7d23514f2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 18:16:54,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:54,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:54,988 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:54,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7894): checking encryption for 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:54,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:54,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:54,989 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:54,993 INFO [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:54,993 INFO [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:54,995 DEBUG [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/f 2023-07-11 18:16:54,996 DEBUG [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/f 2023-07-11 18:16:54,996 DEBUG [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/f 2023-07-11 18:16:54,996 DEBUG [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/f 2023-07-11 18:16:54,996 INFO [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61d04f50bda08b77d69b4e57e8f96fd8 columnFamilyName f 2023-07-11 18:16:54,996 INFO [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); 
ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 026d87329b3162df89bd14e7d23514f2 columnFamilyName f 2023-07-11 18:16:55,000 INFO [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] regionserver.HStore(310): Store=61d04f50bda08b77d69b4e57e8f96fd8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:55,000 INFO [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] regionserver.HStore(310): Store=026d87329b3162df89bd14e7d23514f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:55,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:55,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:55,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:55,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:55,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:55,009 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:55,015 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:55,016 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 61d04f50bda08b77d69b4e57e8f96fd8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10924384960, jitterRate=0.017412632703781128}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:55,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 61d04f50bda08b77d69b4e57e8f96fd8: 
2023-07-11 18:16:55,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:55,017 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 026d87329b3162df89bd14e7d23514f2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10602686720, jitterRate=-0.012547850608825684}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:55,018 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 026d87329b3162df89bd14e7d23514f2: 2023-07-11 18:16:55,018 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8., pid=25, masterSystemTime=1689099414983 2023-07-11 18:16:55,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2., pid=26, masterSystemTime=1689099414982 2023-07-11 18:16:55,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:55,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:55,020 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 
2023-07-11 18:16:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 32a6515c1e6bd1907cd77c5a5126ceec, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 18:16:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,021 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,021 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415021"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099415021"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099415021"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099415021"}]},"ts":"1689099415021"} 2023-07-11 18:16:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:55,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:55,022 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 
2023-07-11 18:16:55,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 04f6129d49b9dd150b4925489b877f85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 18:16:55,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:55,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,023 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:55,027 INFO [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,028 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415022"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099415022"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099415022"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099415022"}]},"ts":"1689099415022"} 2023-07-11 18:16:55,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,029 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=23 2023-07-11 18:16:55,031 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=23, state=SUCCESS; OpenRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,45821,1689099407865 in 193 msec 2023-07-11 18:16:55,032 DEBUG [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/f 2023-07-11 18:16:55,033 DEBUG [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/f 2023-07-11 18:16:55,034 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, ASSIGN in 363 msec 2023-07-11 18:16:55,034 INFO [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,034 INFO [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 32a6515c1e6bd1907cd77c5a5126ceec columnFamilyName f 2023-07-11 18:16:55,036 INFO [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] regionserver.HStore(310): Store=32a6515c1e6bd1907cd77c5a5126ceec/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:55,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=22 2023-07-11 18:16:55,037 DEBUG [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/f 2023-07-11 18:16:55,039 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=22, state=SUCCESS; OpenRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,45471,1689099407428 in 201 msec 2023-07-11 18:16:55,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,039 DEBUG [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/f 2023-07-11 18:16:55,040 INFO [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, 
region 04f6129d49b9dd150b4925489b877f85 columnFamilyName f 2023-07-11 18:16:55,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,041 INFO [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] regionserver.HStore(310): Store=04f6129d49b9dd150b4925489b877f85/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:55,042 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, ASSIGN in 371 msec 2023-07-11 18:16:55,043 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,044 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,047 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,051 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:55,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:55,084 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 04f6129d49b9dd150b4925489b877f85; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10453057280, jitterRate=-0.02648317813873291}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:55,084 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 04f6129d49b9dd150b4925489b877f85: 2023-07-11 18:16:55,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 32a6515c1e6bd1907cd77c5a5126ceec; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9839488480, jitterRate=-0.08362622559070587}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:55,085 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(965): Region open journal for 32a6515c1e6bd1907cd77c5a5126ceec: 2023-07-11 18:16:55,085 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85., pid=24, masterSystemTime=1689099414982 2023-07-11 18:16:55,088 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec., pid=27, masterSystemTime=1689099414983 2023-07-11 18:16:55,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:55,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:55,091 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:55,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8ec95adb2f471e095141858fd6142996, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 18:16:55,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:55,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,092 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:55,093 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415092"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099415092"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099415092"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099415092"}]},"ts":"1689099415092"} 2023-07-11 18:16:55,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 
2023-07-11 18:16:55,094 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:55,095 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:55,096 INFO [StoreOpener-8ec95adb2f471e095141858fd6142996-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,096 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415095"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099415095"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099415095"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099415095"}]},"ts":"1689099415095"} 2023-07-11 18:16:55,099 DEBUG [StoreOpener-8ec95adb2f471e095141858fd6142996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/f 2023-07-11 18:16:55,099 DEBUG [StoreOpener-8ec95adb2f471e095141858fd6142996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/f 2023-07-11 18:16:55,099 INFO [StoreOpener-8ec95adb2f471e095141858fd6142996-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8ec95adb2f471e095141858fd6142996 columnFamilyName f 2023-07-11 18:16:55,100 INFO [StoreOpener-8ec95adb2f471e095141858fd6142996-1] regionserver.HStore(310): Store=8ec95adb2f471e095141858fd6142996/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:55,101 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,104 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=21 2023-07-11 18:16:55,107 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=21, state=SUCCESS; OpenRegionProcedure 04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,45471,1689099407428 in 268 msec 2023-07-11 18:16:55,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=20 2023-07-11 18:16:55,111 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=20, state=SUCCESS; OpenRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,45821,1689099407865 in 264 msec 2023-07-11 18:16:55,112 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, ASSIGN in 441 msec 2023-07-11 18:16:55,116 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, ASSIGN in 445 msec 2023-07-11 18:16:55,118 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:55,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8ec95adb2f471e095141858fd6142996; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11559682560, jitterRate=0.07657933235168457}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:55,120 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8ec95adb2f471e095141858fd6142996: 2023-07-11 18:16:55,121 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996., pid=28, masterSystemTime=1689099414982 2023-07-11 18:16:55,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:55,123 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 
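[editor note, not part of the captured log] At this point CreateTableProcedure pid=18 has opened all five pre-split regions of Group_testTableMoveTruncateAndDrop (boundaries '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). A hedged sketch of how a test typically creates such a pre-split table with the single family 'f' via the Admin API; the split keys below are simplified, illustrative assumptions, not the exact byte arrays this test used:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreatePreSplitTable {
      public static void main(String[] args) throws Exception {
        TableName name = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        TableDescriptor desc = TableDescriptorBuilder.newBuilder(name)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // single family "f", as in the log
            .build();
        // Illustrative split keys: 4 split points yield 5 regions, matching the 5 ASSIGN procedures above.
        byte[][] splits = {
            Bytes.toBytes("aaaaa"), Bytes.toBytes("h"), Bytes.toBytes("q"), Bytes.toBytes("zzzzz")
        };
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          admin.createTable(desc, splits);
        }
      }
    }

Once createTable returns, the client still polls "Checking to see if procedure is done pid=18", which is exactly the MasterRpcServices line visible further down in this log.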
2023-07-11 18:16:55,124 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:55,125 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415124"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099415124"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099415124"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099415124"}]},"ts":"1689099415124"} 2023-07-11 18:16:55,130 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=19 2023-07-11 18:16:55,131 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=19, state=SUCCESS; OpenRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,45471,1689099407428 in 289 msec 2023-07-11 18:16:55,134 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=19, resume processing ppid=18 2023-07-11 18:16:55,134 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, ASSIGN in 465 msec 2023-07-11 18:16:55,135 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:16:55,135 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099415135"}]},"ts":"1689099415135"} 2023-07-11 18:16:55,137 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-11 18:16:55,140 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:16:55,142 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 820 msec 2023-07-11 18:16:55,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-11 18:16:55,447 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-11 18:16:55,447 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-11 18:16:55,449 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:55,450 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37773] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:46112 deadline: 1689099475450, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1689099407428. As of locationSeqNum=15. 2023-07-11 18:16:55,554 DEBUG [hconnection-0x7b3db8b3-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:16:55,564 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:16:55,576 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-11 18:16:55,577 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:55,577 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-11 18:16:55,578 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:55,584 DEBUG [Listener at localhost/35107] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:55,589 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46140, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:55,593 DEBUG [Listener at localhost/35107] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:55,597 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60490, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:55,598 DEBUG [Listener at localhost/35107] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:55,606 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:53288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:55,608 DEBUG [Listener at localhost/35107] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:55,611 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43070, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:55,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:55,626 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:16:55,627 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:55,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:55,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:55,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 8ec95adb2f471e095141858fd6142996 to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:55,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:55,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:55,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:55,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:55,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, REOPEN/MOVE 2023-07-11 18:16:55,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 32a6515c1e6bd1907cd77c5a5126ceec to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,654 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, REOPEN/MOVE 2023-07-11 18:16:55,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:55,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:55,654 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:55,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:55,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:55,655 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:55,655 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415655"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415655"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415655"}]},"ts":"1689099415655"} 2023-07-11 18:16:55,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, REOPEN/MOVE 2023-07-11 18:16:55,656 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 04f6129d49b9dd150b4925489b877f85 to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,657 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, REOPEN/MOVE 2023-07-11 18:16:55,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:55,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:55,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:55,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:55,657 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:55,658 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:55,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:55,659 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, REOPEN/MOVE 
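[editor note, not part of the captured log] The records around here show the RSGroupAdminEndpoint handling the moveTables request: for every region of the table the master stores a REOPEN/MOVE TransitRegionStateProcedure, which first closes the region on its current server. A hedged sketch of how a client could issue the same kind of move against the coprocessor-based rsgroup admin shipped in the 2.4 hbase-rsgroup module; the client class and method signature are assumptions about that module's API (RSGroupAdminEndpoint must be loaded on the master), and the group/table names are copied from the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient; // assumed client from the hbase-rsgroup module

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The target group already exists in this log; moving the table triggers one REOPEN/MOVE per region.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
              "Group_testTableMoveTruncateAndDrop_947841941");
        }
      }
    }

The znode updates under /hbase/rsgroup/ seen just above are the group manager persisting the new table-to-group mapping before the region moves begin.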
2023-07-11 18:16:55,659 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415658"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415658"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415658"}]},"ts":"1689099415658"} 2023-07-11 18:16:55,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 026d87329b3162df89bd14e7d23514f2 to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,660 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, REOPEN/MOVE 2023-07-11 18:16:55,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:55,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:55,660 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:55,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:55,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:55,662 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:55,662 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415662"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415662"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415662"}]},"ts":"1689099415662"} 2023-07-11 18:16:55,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, REOPEN/MOVE 2023-07-11 18:16:55,662 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 61d04f50bda08b77d69b4e57e8f96fd8 to RSGroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:55,663 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, REOPEN/MOVE 2023-07-11 18:16:55,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:55,663 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:16:55,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:55,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:55,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:16:55,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:55,666 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:55,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure 04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:55,666 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415666"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415666"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415666"}]},"ts":"1689099415666"} 2023-07-11 18:16:55,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, REOPEN/MOVE 2023-07-11 18:16:55,668 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_947841941, current retry=0 2023-07-11 18:16:55,669 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, REOPEN/MOVE 2023-07-11 18:16:55,670 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:55,671 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:16:55,671 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415671"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415671"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415671"}]},"ts":"1689099415671"} 2023-07-11 18:16:55,678 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=38, ppid=35, state=RUNNABLE; CloseRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:16:55,756 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 18:16:55,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,818 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8ec95adb2f471e095141858fd6142996, disabling compactions & flushes 2023-07-11 18:16:55,819 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:55,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:55,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. after waiting 0 ms 2023-07-11 18:16:55,819 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:55,822 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 32a6515c1e6bd1907cd77c5a5126ceec, disabling compactions & flushes 2023-07-11 18:16:55,823 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:55,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:55,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. after waiting 0 ms 2023-07-11 18:16:55,823 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:55,829 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:55,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 
2023-07-11 18:16:55,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8ec95adb2f471e095141858fd6142996: 2023-07-11 18:16:55,831 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 8ec95adb2f471e095141858fd6142996 move to jenkins-hbase4.apache.org,37889,1689099411612 record at close sequenceid=2 2023-07-11 18:16:55,833 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-11 18:16:55,834 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-11 18:16:55,835 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:55,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:55,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 026d87329b3162df89bd14e7d23514f2, disabling compactions & flushes 2023-07-11 18:16:55,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:55,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:55,837 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 18:16:55,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:55,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. after waiting 0 ms 2023-07-11 18:16:55,837 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=CLOSED 2023-07-11 18:16:55,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 
2023-07-11 18:16:55,837 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415837"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099415837"}]},"ts":"1689099415837"} 2023-07-11 18:16:55,837 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-11 18:16:55,838 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:16:55,838 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-11 18:16:55,839 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 18:16:55,839 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-11 18:16:55,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:55,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 32a6515c1e6bd1907cd77c5a5126ceec: 2023-07-11 18:16:55,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 32a6515c1e6bd1907cd77c5a5126ceec move to jenkins-hbase4.apache.org,37773,1689099407650 record at close sequenceid=2 2023-07-11 18:16:55,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:55,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:55,843 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=CLOSED 2023-07-11 18:16:55,843 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415843"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099415843"}]},"ts":"1689099415843"} 2023-07-11 18:16:55,844 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-11 18:16:55,844 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,45471,1689099407428 in 182 msec 2023-07-11 18:16:55,845 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37889,1689099411612; forceNewPlan=false, retain=false 2023-07-11 18:16:55,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-11 18:16:55,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,45821,1689099407865 in 182 msec 2023-07-11 18:16:55,849 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:55,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 61d04f50bda08b77d69b4e57e8f96fd8, disabling compactions & flushes 2023-07-11 18:16:55,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:55,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:55,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. after waiting 0 ms 2023-07-11 18:16:55,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:55,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:55,857 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 
2023-07-11 18:16:55,857 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 026d87329b3162df89bd14e7d23514f2: 2023-07-11 18:16:55,858 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 026d87329b3162df89bd14e7d23514f2 move to jenkins-hbase4.apache.org,37773,1689099407650 record at close sequenceid=2 2023-07-11 18:16:55,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:55,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:55,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 04f6129d49b9dd150b4925489b877f85, disabling compactions & flushes 2023-07-11 18:16:55,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:55,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:55,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. after waiting 0 ms 2023-07-11 18:16:55,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:55,863 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=CLOSED 2023-07-11 18:16:55,863 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415863"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099415863"}]},"ts":"1689099415863"} 2023-07-11 18:16:55,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 
2023-07-11 18:16:55,865 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 61d04f50bda08b77d69b4e57e8f96fd8: 2023-07-11 18:16:55,865 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 61d04f50bda08b77d69b4e57e8f96fd8 move to jenkins-hbase4.apache.org,37773,1689099407650 record at close sequenceid=2 2023-07-11 18:16:55,868 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:55,870 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=CLOSED 2023-07-11 18:16:55,870 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415870"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099415870"}]},"ts":"1689099415870"} 2023-07-11 18:16:55,872 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:55,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-11 18:16:55,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,45471,1689099407428 in 197 msec 2023-07-11 18:16:55,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 
2023-07-11 18:16:55,873 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 04f6129d49b9dd150b4925489b877f85: 2023-07-11 18:16:55,873 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 04f6129d49b9dd150b4925489b877f85 move to jenkins-hbase4.apache.org,37773,1689099407650 record at close sequenceid=2 2023-07-11 18:16:55,874 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:55,877 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:55,878 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=CLOSED 2023-07-11 18:16:55,878 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=35 2023-07-11 18:16:55,878 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415878"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099415878"}]},"ts":"1689099415878"} 2023-07-11 18:16:55,878 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=35, state=SUCCESS; CloseRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,45821,1689099407865 in 196 msec 2023-07-11 18:16:55,882 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:55,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-11 18:16:55,884 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure 04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,45471,1689099407428 in 214 msec 2023-07-11 18:16:55,885 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:55,996 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
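[editor note, not part of the captured log] All five regions have now been closed on their old servers and the balancer has chosen new locations inside the target group ("Reassigned 5 regions"); the meta rows flip to OPENING and OpenRegionProcedures are dispatched next. A hedged sketch of how a test can wait for the moved table to settle and inspect where its regions ended up; waitUntilAllRegionsAssigned is the same HBaseTestingUtility helper this log references, and the rest is the standard client API (a sketch under those assumptions, not this test's code):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class VerifyTableMove {
      static void verify(HBaseTestingUtility util, TableName table) throws Exception {
        // Same helper as HBaseTestingUtility(3430) above: blocks until every region has a location in meta.
        util.waitUntilAllRegionsAssigned(table);
        // Use the utility's shared connection; do not close it here, only the locator.
        try (RegionLocator locator = util.getConnection().getRegionLocator(table)) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            // After the REOPEN/MOVE procedures finish, each region should report a server from the target group.
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }

In this run the OPENING updates that follow all point at jenkins-hbase4.apache.org,37773 and 37889, the servers belonging to Group_testTableMoveTruncateAndDrop_947841941.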
2023-07-11 18:16:55,996 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:55,997 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415996"}]},"ts":"1689099415996"} 2023-07-11 18:16:55,996 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:55,997 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415996"}]},"ts":"1689099415996"} 2023-07-11 18:16:55,996 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:55,996 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:55,996 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:55,998 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099415996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415996"}]},"ts":"1689099415996"} 2023-07-11 18:16:55,998 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415996"}]},"ts":"1689099415996"} 2023-07-11 18:16:55,998 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099415996"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099415996"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099415996"}]},"ts":"1689099415996"} 2023-07-11 18:16:56,000 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=31, state=RUNNABLE; OpenRegionProcedure 
04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,002 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=33, state=RUNNABLE; OpenRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,003 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,005 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=35, state=RUNNABLE; OpenRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=29, state=RUNNABLE; OpenRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:56,163 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:56,164 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:16:56,165 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:56,165 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60492, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:16:56,165 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 04f6129d49b9dd150b4925489b877f85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 18:16:56,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:56,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,166 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,169 INFO [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,171 DEBUG [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/f 2023-07-11 18:16:56,171 DEBUG [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/f 2023-07-11 18:16:56,172 INFO [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 04f6129d49b9dd150b4925489b877f85 columnFamilyName f 2023-07-11 18:16:56,173 INFO [StoreOpener-04f6129d49b9dd150b4925489b877f85-1] regionserver.HStore(310): Store=04f6129d49b9dd150b4925489b877f85/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:56,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:56,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8ec95adb2f471e095141858fd6142996, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 18:16:56,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:56,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,186 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 04f6129d49b9dd150b4925489b877f85; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11338654560, jitterRate=0.05599449574947357}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:56,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 04f6129d49b9dd150b4925489b877f85: 2023-07-11 18:16:56,189 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85., pid=39, masterSystemTime=1689099416157 2023-07-11 18:16:56,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:56,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:56,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 
2023-07-11 18:16:56,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 026d87329b3162df89bd14e7d23514f2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 18:16:56,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,192 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:56,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,192 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416191"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099416191"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099416191"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099416191"}]},"ts":"1689099416191"} 2023-07-11 18:16:56,198 INFO [StoreOpener-8ec95adb2f471e095141858fd6142996-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,199 INFO [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,200 DEBUG [StoreOpener-8ec95adb2f471e095141858fd6142996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/f 2023-07-11 18:16:56,200 DEBUG [StoreOpener-8ec95adb2f471e095141858fd6142996-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/f 2023-07-11 18:16:56,201 DEBUG [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/f 2023-07-11 18:16:56,201 DEBUG [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/f 2023-07-11 18:16:56,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=31 2023-07-11 18:16:56,201 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=31, state=SUCCESS; OpenRegionProcedure 04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,37773,1689099407650 in 198 msec 2023-07-11 18:16:56,202 INFO [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 026d87329b3162df89bd14e7d23514f2 columnFamilyName f 2023-07-11 18:16:56,203 INFO [StoreOpener-026d87329b3162df89bd14e7d23514f2-1] regionserver.HStore(310): Store=026d87329b3162df89bd14e7d23514f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:56,204 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, REOPEN/MOVE in 543 msec 2023-07-11 18:16:56,207 INFO [StoreOpener-8ec95adb2f471e095141858fd6142996-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8ec95adb2f471e095141858fd6142996 columnFamilyName f 2023-07-11 18:16:56,208 INFO [StoreOpener-8ec95adb2f471e095141858fd6142996-1] regionserver.HStore(310): Store=8ec95adb2f471e095141858fd6142996/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:56,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 
2023-07-11 18:16:56,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,211 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,212 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,218 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 026d87329b3162df89bd14e7d23514f2; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10667570880, jitterRate=-0.006505042314529419}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:56,218 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 026d87329b3162df89bd14e7d23514f2: 2023-07-11 18:16:56,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2., pid=40, masterSystemTime=1689099416157 2023-07-11 18:16:56,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:56,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:56,221 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 
2023-07-11 18:16:56,221 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 32a6515c1e6bd1907cd77c5a5126ceec, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 18:16:56,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,222 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:56,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,222 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416222"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099416222"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099416222"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099416222"}]},"ts":"1689099416222"} 2023-07-11 18:16:56,222 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8ec95adb2f471e095141858fd6142996; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10472982400, jitterRate=-0.024627506732940674}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:56,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8ec95adb2f471e095141858fd6142996: 2023-07-11 18:16:56,224 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996., pid=43, masterSystemTime=1689099416163 2023-07-11 18:16:56,232 INFO [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,233 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 
2023-07-11 18:16:56,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:56,234 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:56,235 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099416234"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099416234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099416234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099416234"}]},"ts":"1689099416234"} 2023-07-11 18:16:56,235 DEBUG [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/f 2023-07-11 18:16:56,235 DEBUG [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/f 2023-07-11 18:16:56,235 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=33 2023-07-11 18:16:56,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=33, state=SUCCESS; OpenRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,37773,1689099407650 in 228 msec 2023-07-11 18:16:56,236 INFO [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 32a6515c1e6bd1907cd77c5a5126ceec columnFamilyName f 2023-07-11 18:16:56,237 INFO [StoreOpener-32a6515c1e6bd1907cd77c5a5126ceec-1] regionserver.HStore(310): Store=32a6515c1e6bd1907cd77c5a5126ceec/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:56,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, REOPEN/MOVE in 575 msec 2023-07-11 18:16:56,239 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=29 2023-07-11 18:16:56,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=29, state=SUCCESS; OpenRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,37889,1689099411612 in 229 msec 2023-07-11 18:16:56,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,243 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, REOPEN/MOVE in 589 msec 2023-07-11 18:16:56,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 32a6515c1e6bd1907cd77c5a5126ceec; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12043450720, jitterRate=0.12163375318050385}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:56,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 32a6515c1e6bd1907cd77c5a5126ceec: 2023-07-11 18:16:56,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec., pid=41, masterSystemTime=1689099416157 2023-07-11 18:16:56,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:56,252 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:56,253 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 
2023-07-11 18:16:56,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 61d04f50bda08b77d69b4e57e8f96fd8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 18:16:56,253 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,253 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416253"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099416253"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099416253"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099416253"}]},"ts":"1689099416253"} 2023-07-11 18:16:56,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:56,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,256 INFO [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-11 18:16:56,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,37773,1689099407650 in 253 msec 2023-07-11 18:16:56,260 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, REOPEN/MOVE in 603 msec 2023-07-11 18:16:56,264 DEBUG [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/f 2023-07-11 18:16:56,264 DEBUG [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/f 2023-07-11 18:16:56,265 INFO [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 61d04f50bda08b77d69b4e57e8f96fd8 columnFamilyName f 2023-07-11 18:16:56,265 INFO [StoreOpener-61d04f50bda08b77d69b4e57e8f96fd8-1] regionserver.HStore(310): Store=61d04f50bda08b77d69b4e57e8f96fd8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:56,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,268 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,274 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,275 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 61d04f50bda08b77d69b4e57e8f96fd8; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11200863520, jitterRate=0.04316170513629913}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:56,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 61d04f50bda08b77d69b4e57e8f96fd8: 2023-07-11 18:16:56,278 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8., pid=42, masterSystemTime=1689099416157 2023-07-11 18:16:56,283 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,283 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099416282"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099416282"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099416282"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099416282"}]},"ts":"1689099416282"} 2023-07-11 18:16:56,283 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:56,283 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:56,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=35 2023-07-11 18:16:56,289 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=35, state=SUCCESS; OpenRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,37773,1689099407650 in 280 msec 2023-07-11 18:16:56,291 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, REOPEN/MOVE in 625 msec 2023-07-11 18:16:56,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-11 18:16:56,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_947841941. 
2023-07-11 18:16:56,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:56,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:56,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:56,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:56,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:16:56,679 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:56,686 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:56,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:56,698 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:56,703 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099416703"}]},"ts":"1689099416703"} 2023-07-11 18:16:56,704 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-11 18:16:56,705 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-11 18:16:56,707 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-11 18:16:56,712 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, UNASSIGN}] 2023-07-11 18:16:56,715 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, UNASSIGN 2023-07-11 18:16:56,715 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, UNASSIGN 2023-07-11 18:16:56,715 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, UNASSIGN 2023-07-11 18:16:56,715 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, UNASSIGN 2023-07-11 18:16:56,716 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, UNASSIGN 2023-07-11 18:16:56,717 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,717 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:56,717 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,717 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099416717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099416717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099416717"}]},"ts":"1689099416717"} 2023-07-11 18:16:56,717 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099416717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099416717"}]},"ts":"1689099416717"} 2023-07-11 18:16:56,717 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,718 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099416717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099416717"}]},"ts":"1689099416717"} 2023-07-11 18:16:56,717 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:56,718 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099416717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099416717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099416717"}]},"ts":"1689099416717"} 2023-07-11 18:16:56,717 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416717"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099416717"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099416717"}]},"ts":"1689099416717"} 2023-07-11 18:16:56,720 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=45, state=RUNNABLE; CloseRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:56,720 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,724 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=48, state=RUNNABLE; CloseRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,724 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=49, state=RUNNABLE; CloseRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,726 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=47, state=RUNNABLE; CloseRegionProcedure 04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:56,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-11 18:16:56,874 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,875 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 32a6515c1e6bd1907cd77c5a5126ceec, disabling compactions & flushes 2023-07-11 18:16:56,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 
2023-07-11 18:16:56,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:56,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. after waiting 0 ms 2023-07-11 18:16:56,877 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:56,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8ec95adb2f471e095141858fd6142996, disabling compactions & flushes 2023-07-11 18:16:56,878 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:56,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:56,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. after waiting 0 ms 2023-07-11 18:16:56,878 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 2023-07-11 18:16:56,883 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:16:56,884 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec. 2023-07-11 18:16:56,884 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 32a6515c1e6bd1907cd77c5a5126ceec: 2023-07-11 18:16:56,886 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:16:56,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996. 
2023-07-11 18:16:56,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8ec95adb2f471e095141858fd6142996: 2023-07-11 18:16:56,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:56,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 04f6129d49b9dd150b4925489b877f85, disabling compactions & flushes 2023-07-11 18:16:56,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:56,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:56,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. after waiting 0 ms 2023-07-11 18:16:56,889 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 2023-07-11 18:16:56,892 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=32a6515c1e6bd1907cd77c5a5126ceec, regionState=CLOSED 2023-07-11 18:16:56,892 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416892"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099416892"}]},"ts":"1689099416892"} 2023-07-11 18:16:56,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:56,894 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=8ec95adb2f471e095141858fd6142996, regionState=CLOSED 2023-07-11 18:16:56,894 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099416894"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099416894"}]},"ts":"1689099416894"} 2023-07-11 18:16:56,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:16:56,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85. 
2023-07-11 18:16:56,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 04f6129d49b9dd150b4925489b877f85: 2023-07-11 18:16:56,901 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-11 18:16:56,901 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure 32a6515c1e6bd1907cd77c5a5126ceec, server=jenkins-hbase4.apache.org,37773,1689099407650 in 175 msec 2023-07-11 18:16:56,903 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=45 2023-07-11 18:16:56,903 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=04f6129d49b9dd150b4925489b877f85, regionState=CLOSED 2023-07-11 18:16:56,903 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=45, state=SUCCESS; CloseRegionProcedure 8ec95adb2f471e095141858fd6142996, server=jenkins-hbase4.apache.org,37889,1689099411612 in 178 msec 2023-07-11 18:16:56,903 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416903"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099416903"}]},"ts":"1689099416903"} 2023-07-11 18:16:56,904 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=32a6515c1e6bd1907cd77c5a5126ceec, UNASSIGN in 192 msec 2023-07-11 18:16:56,906 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8ec95adb2f471e095141858fd6142996, UNASSIGN in 194 msec 2023-07-11 18:16:56,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:56,907 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 026d87329b3162df89bd14e7d23514f2, disabling compactions & flushes 2023-07-11 18:16:56,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:56,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:56,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. after waiting 0 ms 2023-07-11 18:16:56,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 
2023-07-11 18:16:56,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=47 2023-07-11 18:16:56,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=47, state=SUCCESS; CloseRegionProcedure 04f6129d49b9dd150b4925489b877f85, server=jenkins-hbase4.apache.org,37773,1689099407650 in 180 msec 2023-07-11 18:16:56,911 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=04f6129d49b9dd150b4925489b877f85, UNASSIGN in 200 msec 2023-07-11 18:16:56,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:16:56,918 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2. 2023-07-11 18:16:56,918 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 026d87329b3162df89bd14e7d23514f2: 2023-07-11 18:16:56,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:56,920 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 61d04f50bda08b77d69b4e57e8f96fd8, disabling compactions & flushes 2023-07-11 18:16:56,921 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. after waiting 0 ms 2023-07-11 18:16:56,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 
2023-07-11 18:16:56,922 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=026d87329b3162df89bd14e7d23514f2, regionState=CLOSED 2023-07-11 18:16:56,922 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099416922"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099416922"}]},"ts":"1689099416922"} 2023-07-11 18:16:56,927 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=48 2023-07-11 18:16:56,927 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=48, state=SUCCESS; CloseRegionProcedure 026d87329b3162df89bd14e7d23514f2, server=jenkins-hbase4.apache.org,37773,1689099407650 in 200 msec 2023-07-11 18:16:56,927 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:16:56,929 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8. 2023-07-11 18:16:56,929 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=026d87329b3162df89bd14e7d23514f2, UNASSIGN in 218 msec 2023-07-11 18:16:56,929 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 61d04f50bda08b77d69b4e57e8f96fd8: 2023-07-11 18:16:56,930 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:56,931 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=61d04f50bda08b77d69b4e57e8f96fd8, regionState=CLOSED 2023-07-11 18:16:56,931 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099416931"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099416931"}]},"ts":"1689099416931"} 2023-07-11 18:16:56,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=49 2023-07-11 18:16:56,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=49, state=SUCCESS; CloseRegionProcedure 61d04f50bda08b77d69b4e57e8f96fd8, server=jenkins-hbase4.apache.org,37773,1689099407650 in 209 msec 2023-07-11 18:16:56,938 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=44 2023-07-11 18:16:56,938 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=61d04f50bda08b77d69b4e57e8f96fd8, UNASSIGN in 226 msec 2023-07-11 18:16:56,939 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099416939"}]},"ts":"1689099416939"} 2023-07-11 18:16:56,941 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-11 18:16:56,944 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-11 18:16:56,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 253 msec 2023-07-11 18:16:57,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-11 18:16:57,008 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-11 18:16:57,010 INFO [Listener at localhost/35107] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:57,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:57,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-11 18:16:57,026 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-11 18:16:57,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-11 18:16:57,040 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:57,040 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:57,040 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:57,040 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:57,040 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:57,045 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/f, FileablePath, 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/recovered.edits] 2023-07-11 18:16:57,045 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/recovered.edits] 2023-07-11 18:16:57,045 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/recovered.edits] 2023-07-11 18:16:57,046 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/recovered.edits] 2023-07-11 18:16:57,054 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/recovered.edits] 2023-07-11 18:16:57,062 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/recovered.edits/7.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8/recovered.edits/7.seqid 2023-07-11 18:16:57,062 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/recovered.edits/7.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec/recovered.edits/7.seqid 2023-07-11 18:16:57,062 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/recovered.edits/7.seqid to 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85/recovered.edits/7.seqid 2023-07-11 18:16:57,064 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/61d04f50bda08b77d69b4e57e8f96fd8 2023-07-11 18:16:57,064 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/32a6515c1e6bd1907cd77c5a5126ceec 2023-07-11 18:16:57,064 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/04f6129d49b9dd150b4925489b877f85 2023-07-11 18:16:57,064 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/recovered.edits/7.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996/recovered.edits/7.seqid 2023-07-11 18:16:57,065 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8ec95adb2f471e095141858fd6142996 2023-07-11 18:16:57,069 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/recovered.edits/7.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2/recovered.edits/7.seqid 2023-07-11 18:16:57,069 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/026d87329b3162df89bd14e7d23514f2 2023-07-11 18:16:57,069 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 18:16:57,097 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-11 18:16:57,102 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-11 18:16:57,102 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-11 18:16:57,103 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099417102"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:57,103 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099417102"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:57,103 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099417102"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:57,103 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099417102"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:57,103 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099417102"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:57,105 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-11 18:16:57,106 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 8ec95adb2f471e095141858fd6142996, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099414317.8ec95adb2f471e095141858fd6142996.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 32a6515c1e6bd1907cd77c5a5126ceec, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099414317.32a6515c1e6bd1907cd77c5a5126ceec.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 04f6129d49b9dd150b4925489b877f85, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099414317.04f6129d49b9dd150b4925489b877f85.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 026d87329b3162df89bd14e7d23514f2, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099414317.026d87329b3162df89bd14e7d23514f2.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 61d04f50bda08b77d69b4e57e8f96fd8, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099414317.61d04f50bda08b77d69b4e57e8f96fd8.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-11 18:16:57,106 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-11 18:16:57,106 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099417106"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:57,108 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-11 18:16:57,116 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,116 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,116 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,116 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,116 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 empty. 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b empty. 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 empty. 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 empty. 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 empty. 
2023-07-11 18:16:57,117 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,117 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,118 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,118 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,118 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 18:16:57,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-11 18:16:57,147 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-11 18:16:57,150 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 4065515675c69107706229ae27cf88f8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:57,150 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 09347b68b7c9df4b68267ed426f900d3, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:57,150 INFO 
[RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 93117f2f0eac5f4b25372f994df8b5d4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:57,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 09347b68b7c9df4b68267ed426f900d3, disabling compactions & flushes 2023-07-11 18:16:57,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:57,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:57,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. after waiting 0 ms 2023-07-11 18:16:57,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:57,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 
2023-07-11 18:16:57,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 09347b68b7c9df4b68267ed426f900d3: 2023-07-11 18:16:57,192 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2b313e94e3bb9dfe764f7911bb7dba9b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:57,192 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,192 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 93117f2f0eac5f4b25372f994df8b5d4, disabling compactions & flushes 2023-07-11 18:16:57,192 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:57,192 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:57,192 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. after waiting 0 ms 2023-07-11 18:16:57,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:57,193 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 
2023-07-11 18:16:57,193 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 93117f2f0eac5f4b25372f994df8b5d4: 2023-07-11 18:16:57,193 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 45884730e61070d79b55e19fd1e290d5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:16:57,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,197 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 4065515675c69107706229ae27cf88f8, disabling compactions & flushes 2023-07-11 18:16:57,197 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:57,198 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:57,198 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. after waiting 0 ms 2023-07-11 18:16:57,198 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:57,198 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 
2023-07-11 18:16:57,198 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 4065515675c69107706229ae27cf88f8: 2023-07-11 18:16:57,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2b313e94e3bb9dfe764f7911bb7dba9b, disabling compactions & flushes 2023-07-11 18:16:57,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 45884730e61070d79b55e19fd1e290d5, disabling compactions & flushes 2023-07-11 18:16:57,214 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:57,214 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:57,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:57,214 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:57,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. after waiting 0 ms 2023-07-11 18:16:57,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:57,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. after waiting 0 ms 2023-07-11 18:16:57,215 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:57,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 
2023-07-11 18:16:57,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2b313e94e3bb9dfe764f7911bb7dba9b: 2023-07-11 18:16:57,215 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:57,215 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 45884730e61070d79b55e19fd1e290d5: 2023-07-11 18:16:57,220 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417219"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099417219"}]},"ts":"1689099417219"} 2023-07-11 18:16:57,220 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099417219"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099417219"}]},"ts":"1689099417219"} 2023-07-11 18:16:57,220 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099417072.4065515675c69107706229ae27cf88f8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417219"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099417219"}]},"ts":"1689099417219"} 2023-07-11 18:16:57,220 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417219"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099417219"}]},"ts":"1689099417219"} 2023-07-11 18:16:57,220 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099417219"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099417219"}]},"ts":"1689099417219"} 2023-07-11 18:16:57,223 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-11 18:16:57,225 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099417225"}]},"ts":"1689099417225"} 2023-07-11 18:16:57,227 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-11 18:16:57,232 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:16:57,232 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:16:57,232 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:16:57,232 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:16:57,233 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4065515675c69107706229ae27cf88f8, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, ASSIGN}] 2023-07-11 18:16:57,236 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, ASSIGN 2023-07-11 18:16:57,236 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, ASSIGN 2023-07-11 18:16:57,236 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4065515675c69107706229ae27cf88f8, ASSIGN 2023-07-11 18:16:57,236 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, ASSIGN 2023-07-11 18:16:57,236 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, ASSIGN 2023-07-11 18:16:57,237 INFO [PEWorker-5] 
assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37889,1689099411612; forceNewPlan=false, retain=false 2023-07-11 18:16:57,237 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37889,1689099411612; forceNewPlan=false, retain=false 2023-07-11 18:16:57,237 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4065515675c69107706229ae27cf88f8, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37889,1689099411612; forceNewPlan=false, retain=false 2023-07-11 18:16:57,237 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:57,237 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:16:57,331 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-11 18:16:57,388 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-11 18:16:57,394 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=45884730e61070d79b55e19fd1e290d5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:57,395 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099417394"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099417394"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099417394"}]},"ts":"1689099417394"} 2023-07-11 18:16:57,396 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=93117f2f0eac5f4b25372f994df8b5d4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:57,399 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099417396"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099417396"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099417396"}]},"ts":"1689099417396"} 2023-07-11 18:16:57,396 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=2b313e94e3bb9dfe764f7911bb7dba9b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:57,396 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=09347b68b7c9df4b68267ed426f900d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:57,399 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417396"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099417396"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099417396"}]},"ts":"1689099417396"} 2023-07-11 18:16:57,400 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417396"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099417396"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099417396"}]},"ts":"1689099417396"} 2023-07-11 18:16:57,395 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=4065515675c69107706229ae27cf88f8, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:57,400 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099417072.4065515675c69107706229ae27cf88f8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417395"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099417395"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099417395"}]},"ts":"1689099417395"} 2023-07-11 18:16:57,414 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
45884730e61070d79b55e19fd1e290d5, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:57,418 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure 93117f2f0eac5f4b25372f994df8b5d4, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:57,420 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=59, state=RUNNABLE; OpenRegionProcedure 2b313e94e3bb9dfe764f7911bb7dba9b, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:57,421 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=57, state=RUNNABLE; OpenRegionProcedure 09347b68b7c9df4b68267ed426f900d3, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:57,426 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=58, state=RUNNABLE; OpenRegionProcedure 4065515675c69107706229ae27cf88f8, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:57,573 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:57,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4065515675c69107706229ae27cf88f8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 18:16:57,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,574 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,576 INFO [StoreOpener-4065515675c69107706229ae27cf88f8-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,576 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 
2023-07-11 18:16:57,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 93117f2f0eac5f4b25372f994df8b5d4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 18:16:57,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,576 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,577 DEBUG [StoreOpener-4065515675c69107706229ae27cf88f8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/f 2023-07-11 18:16:57,577 DEBUG [StoreOpener-4065515675c69107706229ae27cf88f8-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/f 2023-07-11 18:16:57,578 INFO [StoreOpener-93117f2f0eac5f4b25372f994df8b5d4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,578 INFO [StoreOpener-4065515675c69107706229ae27cf88f8-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4065515675c69107706229ae27cf88f8 columnFamilyName f 2023-07-11 18:16:57,579 INFO [StoreOpener-4065515675c69107706229ae27cf88f8-1] regionserver.HStore(310): Store=4065515675c69107706229ae27cf88f8/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:57,579 DEBUG [StoreOpener-93117f2f0eac5f4b25372f994df8b5d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/f 2023-07-11 
18:16:57,579 DEBUG [StoreOpener-93117f2f0eac5f4b25372f994df8b5d4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/f 2023-07-11 18:16:57,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,580 INFO [StoreOpener-93117f2f0eac5f4b25372f994df8b5d4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 93117f2f0eac5f4b25372f994df8b5d4 columnFamilyName f 2023-07-11 18:16:57,580 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,580 INFO [StoreOpener-93117f2f0eac5f4b25372f994df8b5d4-1] regionserver.HStore(310): Store=93117f2f0eac5f4b25372f994df8b5d4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:57,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,582 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:57,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:57,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:57,587 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4065515675c69107706229ae27cf88f8; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9601601440, jitterRate=-0.1057811826467514}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:57,587 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4065515675c69107706229ae27cf88f8: 2023-07-11 18:16:57,588 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8., pid=65, masterSystemTime=1689099417569 2023-07-11 18:16:57,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:57,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 93117f2f0eac5f4b25372f994df8b5d4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9703591680, jitterRate=-0.09628260135650635}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:57,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 93117f2f0eac5f4b25372f994df8b5d4: 2023-07-11 18:16:57,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4., pid=62, masterSystemTime=1689099417572 2023-07-11 18:16:57,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:57,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:57,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 
2023-07-11 18:16:57,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 45884730e61070d79b55e19fd1e290d5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 18:16:57,591 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=4065515675c69107706229ae27cf88f8, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:57,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,591 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099417072.4065515675c69107706229ae27cf88f8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417591"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099417591"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099417591"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099417591"}]},"ts":"1689099417591"} 2023-07-11 18:16:57,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:57,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:57,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 
2023-07-11 18:16:57,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2b313e94e3bb9dfe764f7911bb7dba9b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 18:16:57,592 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=93117f2f0eac5f4b25372f994df8b5d4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:57,593 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099417592"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099417592"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099417592"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099417592"}]},"ts":"1689099417592"} 2023-07-11 18:16:57,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,593 INFO [StoreOpener-45884730e61070d79b55e19fd1e290d5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,593 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,595 DEBUG [StoreOpener-45884730e61070d79b55e19fd1e290d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/f 2023-07-11 18:16:57,595 DEBUG [StoreOpener-45884730e61070d79b55e19fd1e290d5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/f 2023-07-11 18:16:57,596 INFO [StoreOpener-2b313e94e3bb9dfe764f7911bb7dba9b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,596 INFO [StoreOpener-45884730e61070d79b55e19fd1e290d5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 45884730e61070d79b55e19fd1e290d5 columnFamilyName f 2023-07-11 18:16:57,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=58 2023-07-11 18:16:57,597 INFO [StoreOpener-45884730e61070d79b55e19fd1e290d5-1] regionserver.HStore(310): Store=45884730e61070d79b55e19fd1e290d5/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:57,597 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=58, state=SUCCESS; OpenRegionProcedure 4065515675c69107706229ae27cf88f8, server=jenkins-hbase4.apache.org,37889,1689099411612 in 171 msec 2023-07-11 18:16:57,598 DEBUG [StoreOpener-2b313e94e3bb9dfe764f7911bb7dba9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/f 2023-07-11 18:16:57,598 DEBUG [StoreOpener-2b313e94e3bb9dfe764f7911bb7dba9b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/f 2023-07-11 18:16:57,599 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-11 18:16:57,599 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure 93117f2f0eac5f4b25372f994df8b5d4, server=jenkins-hbase4.apache.org,37773,1689099407650 in 177 msec 2023-07-11 18:16:57,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,599 INFO [StoreOpener-2b313e94e3bb9dfe764f7911bb7dba9b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2b313e94e3bb9dfe764f7911bb7dba9b columnFamilyName f 2023-07-11 18:16:57,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,600 INFO [StoreOpener-2b313e94e3bb9dfe764f7911bb7dba9b-1] regionserver.HStore(310): Store=2b313e94e3bb9dfe764f7911bb7dba9b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:57,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,603 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4065515675c69107706229ae27cf88f8, ASSIGN in 365 msec 2023-07-11 18:16:57,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:57,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, ASSIGN in 367 msec 2023-07-11 18:16:57,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:57,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:57,607 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 45884730e61070d79b55e19fd1e290d5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11976664800, jitterRate=0.11541382968425751}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:57,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 45884730e61070d79b55e19fd1e290d5: 2023-07-11 18:16:57,608 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5., pid=61, masterSystemTime=1689099417569 2023-07-11 18:16:57,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:57,610 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:57,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:57,610 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:57,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 09347b68b7c9df4b68267ed426f900d3, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 18:16:57,611 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2b313e94e3bb9dfe764f7911bb7dba9b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11851858720, jitterRate=0.10379035770893097}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:57,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:57,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2b313e94e3bb9dfe764f7911bb7dba9b: 2023-07-11 18:16:57,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,611 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,611 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=45884730e61070d79b55e19fd1e290d5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:57,612 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099417611"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099417611"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099417611"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099417611"}]},"ts":"1689099417611"} 2023-07-11 18:16:57,612 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b., pid=63, masterSystemTime=1689099417572 2023-07-11 18:16:57,614 INFO [StoreOpener-09347b68b7c9df4b68267ed426f900d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,616 DEBUG [StoreOpener-09347b68b7c9df4b68267ed426f900d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/f 2023-07-11 18:16:57,616 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=2b313e94e3bb9dfe764f7911bb7dba9b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:57,616 DEBUG [StoreOpener-09347b68b7c9df4b68267ed426f900d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/f 2023-07-11 18:16:57,616 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417616"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099417616"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099417616"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099417616"}]},"ts":"1689099417616"} 2023-07-11 18:16:57,617 INFO [StoreOpener-09347b68b7c9df4b68267ed426f900d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 09347b68b7c9df4b68267ed426f900d3 columnFamilyName f 2023-07-11 18:16:57,617 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:57,617 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 
2023-07-11 18:16:57,618 INFO [StoreOpener-09347b68b7c9df4b68267ed426f900d3-1] regionserver.HStore(310): Store=09347b68b7c9df4b68267ed426f900d3/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:57,619 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-11 18:16:57,619 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure 45884730e61070d79b55e19fd1e290d5, server=jenkins-hbase4.apache.org,37889,1689099411612 in 200 msec 2023-07-11 18:16:57,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, ASSIGN in 387 msec 2023-07-11 18:16:57,621 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=59 2023-07-11 18:16:57,622 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=59, state=SUCCESS; OpenRegionProcedure 2b313e94e3bb9dfe764f7911bb7dba9b, server=jenkins-hbase4.apache.org,37773,1689099407650 in 199 msec 2023-07-11 18:16:57,623 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, ASSIGN in 390 msec 2023-07-11 18:16:57,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:57,628 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:16:57,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 09347b68b7c9df4b68267ed426f900d3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10868610400, jitterRate=0.01221822202205658}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:16:57,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 09347b68b7c9df4b68267ed426f900d3: 2023-07-11 18:16:57,629 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3., pid=64, masterSystemTime=1689099417569 2023-07-11 18:16:57,631 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:57,631 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:57,632 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=09347b68b7c9df4b68267ed426f900d3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:57,632 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099417632"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099417632"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099417632"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099417632"}]},"ts":"1689099417632"} 2023-07-11 18:16:57,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-11 18:16:57,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=57 2023-07-11 18:16:57,638 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=57, state=SUCCESS; OpenRegionProcedure 09347b68b7c9df4b68267ed426f900d3, server=jenkins-hbase4.apache.org,37889,1689099411612 in 213 msec 2023-07-11 18:16:57,640 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=57, resume processing ppid=55 2023-07-11 18:16:57,640 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, ASSIGN in 406 msec 2023-07-11 18:16:57,640 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099417640"}]},"ts":"1689099417640"} 2023-07-11 18:16:57,642 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-11 18:16:57,644 DEBUG [PEWorker-2] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-11 18:16:57,646 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 628 msec 2023-07-11 18:16:58,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-11 18:16:58,135 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-11 18:16:58,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): 
User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,139 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-11 18:16:58,146 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099418146"}]},"ts":"1689099418146"} 2023-07-11 18:16:58,150 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-11 18:16:58,153 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-11 18:16:58,160 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4065515675c69107706229ae27cf88f8, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, UNASSIGN}] 2023-07-11 18:16:58,162 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, UNASSIGN 2023-07-11 18:16:58,162 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, UNASSIGN 2023-07-11 18:16:58,163 INFO [PEWorker-4] 
procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, UNASSIGN 2023-07-11 18:16:58,163 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, UNASSIGN 2023-07-11 18:16:58,163 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4065515675c69107706229ae27cf88f8, UNASSIGN 2023-07-11 18:16:58,167 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=2b313e94e3bb9dfe764f7911bb7dba9b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:58,167 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099418167"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099418167"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099418167"}]},"ts":"1689099418167"} 2023-07-11 18:16:58,168 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=09347b68b7c9df4b68267ed426f900d3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:58,168 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099418168"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099418168"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099418168"}]},"ts":"1689099418168"} 2023-07-11 18:16:58,168 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=93117f2f0eac5f4b25372f994df8b5d4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:16:58,169 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099418168"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099418168"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099418168"}]},"ts":"1689099418168"} 2023-07-11 18:16:58,169 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=45884730e61070d79b55e19fd1e290d5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:58,169 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099418169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099418169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099418169"}]},"ts":"1689099418169"} 2023-07-11 18:16:58,169 INFO [PEWorker-1] 
assignment.RegionStateStore(219): pid=69 updating hbase:meta row=4065515675c69107706229ae27cf88f8, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:16:58,169 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=70, state=RUNNABLE; CloseRegionProcedure 2b313e94e3bb9dfe764f7911bb7dba9b, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:58,169 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099417072.4065515675c69107706229ae27cf88f8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099418169"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099418169"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099418169"}]},"ts":"1689099418169"} 2023-07-11 18:16:58,171 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=68, state=RUNNABLE; CloseRegionProcedure 09347b68b7c9df4b68267ed426f900d3, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:58,174 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=67, state=RUNNABLE; CloseRegionProcedure 93117f2f0eac5f4b25372f994df8b5d4, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:16:58,177 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=71, state=RUNNABLE; CloseRegionProcedure 45884730e61070d79b55e19fd1e290d5, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:58,178 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=69, state=RUNNABLE; CloseRegionProcedure 4065515675c69107706229ae27cf88f8, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:16:58,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-11 18:16:58,325 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:58,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 93117f2f0eac5f4b25372f994df8b5d4, disabling compactions & flushes 2023-07-11 18:16:58,327 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:58,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:58,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. after waiting 0 ms 2023-07-11 18:16:58,327 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 
2023-07-11 18:16:58,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:58,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 09347b68b7c9df4b68267ed426f900d3, disabling compactions & flushes 2023-07-11 18:16:58,329 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:58,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:58,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. after waiting 0 ms 2023-07-11 18:16:58,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 2023-07-11 18:16:58,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:58,338 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4. 2023-07-11 18:16:58,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 93117f2f0eac5f4b25372f994df8b5d4: 2023-07-11 18:16:58,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:58,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3. 
2023-07-11 18:16:58,340 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 09347b68b7c9df4b68267ed426f900d3: 2023-07-11 18:16:58,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=93117f2f0eac5f4b25372f994df8b5d4, regionState=CLOSED 2023-07-11 18:16:58,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:58,342 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:58,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099418342"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099418342"}]},"ts":"1689099418342"} 2023-07-11 18:16:58,343 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2b313e94e3bb9dfe764f7911bb7dba9b, disabling compactions & flushes 2023-07-11 18:16:58,344 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:58,344 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:58,344 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. after waiting 0 ms 2023-07-11 18:16:58,344 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:58,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:58,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:58,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 45884730e61070d79b55e19fd1e290d5, disabling compactions & flushes 2023-07-11 18:16:58,349 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:58,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:58,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 
after waiting 0 ms 2023-07-11 18:16:58,349 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:58,349 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=09347b68b7c9df4b68267ed426f900d3, regionState=CLOSED 2023-07-11 18:16:58,350 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099418349"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099418349"}]},"ts":"1689099418349"} 2023-07-11 18:16:58,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:58,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b. 2023-07-11 18:16:58,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2b313e94e3bb9dfe764f7911bb7dba9b: 2023-07-11 18:16:58,354 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=67 2023-07-11 18:16:58,354 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=67, state=SUCCESS; CloseRegionProcedure 93117f2f0eac5f4b25372f994df8b5d4, server=jenkins-hbase4.apache.org,37773,1689099407650 in 176 msec 2023-07-11 18:16:58,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:58,357 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=2b313e94e3bb9dfe764f7911bb7dba9b, regionState=CLOSED 2023-07-11 18:16:58,357 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099418357"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099418357"}]},"ts":"1689099418357"} 2023-07-11 18:16:58,357 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=68 2023-07-11 18:16:58,358 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=93117f2f0eac5f4b25372f994df8b5d4, UNASSIGN in 196 msec 2023-07-11 18:16:58,358 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=68, state=SUCCESS; CloseRegionProcedure 09347b68b7c9df4b68267ed426f900d3, server=jenkins-hbase4.apache.org,37889,1689099411612 in 181 msec 2023-07-11 18:16:58,359 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=09347b68b7c9df4b68267ed426f900d3, UNASSIGN in 198 msec 2023-07-11 18:16:58,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): 
Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:58,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5. 2023-07-11 18:16:58,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 45884730e61070d79b55e19fd1e290d5: 2023-07-11 18:16:58,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=70 2023-07-11 18:16:58,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=70, state=SUCCESS; CloseRegionProcedure 2b313e94e3bb9dfe764f7911bb7dba9b, server=jenkins-hbase4.apache.org,37773,1689099407650 in 190 msec 2023-07-11 18:16:58,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:58,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:58,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4065515675c69107706229ae27cf88f8, disabling compactions & flushes 2023-07-11 18:16:58,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:58,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:58,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. after waiting 0 ms 2023-07-11 18:16:58,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 
2023-07-11 18:16:58,365 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2b313e94e3bb9dfe764f7911bb7dba9b, UNASSIGN in 201 msec 2023-07-11 18:16:58,370 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:16:58,371 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=45884730e61070d79b55e19fd1e290d5, regionState=CLOSED 2023-07-11 18:16:58,371 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689099418371"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099418371"}]},"ts":"1689099418371"} 2023-07-11 18:16:58,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8. 2023-07-11 18:16:58,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4065515675c69107706229ae27cf88f8: 2023-07-11 18:16:58,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4065515675c69107706229ae27cf88f8 2023-07-11 18:16:58,375 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=4065515675c69107706229ae27cf88f8, regionState=CLOSED 2023-07-11 18:16:58,375 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099417072.4065515675c69107706229ae27cf88f8.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689099418375"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099418375"}]},"ts":"1689099418375"} 2023-07-11 18:16:58,382 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=71 2023-07-11 18:16:58,382 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=71, state=SUCCESS; CloseRegionProcedure 45884730e61070d79b55e19fd1e290d5, server=jenkins-hbase4.apache.org,37889,1689099411612 in 196 msec 2023-07-11 18:16:58,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=69 2023-07-11 18:16:58,384 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=69, state=SUCCESS; CloseRegionProcedure 4065515675c69107706229ae27cf88f8, server=jenkins-hbase4.apache.org,37889,1689099411612 in 201 msec 2023-07-11 18:16:58,384 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=45884730e61070d79b55e19fd1e290d5, UNASSIGN in 222 msec 2023-07-11 18:16:58,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-11 18:16:58,386 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=4065515675c69107706229ae27cf88f8, UNASSIGN in 224 msec 2023-07-11 18:16:58,387 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099418387"}]},"ts":"1689099418387"} 2023-07-11 18:16:58,388 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-11 18:16:58,390 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-11 18:16:58,393 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 252 msec 2023-07-11 18:16:58,448 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-11 18:16:58,448 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-11 18:16:58,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,462 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,467 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from rsgroup 'Group_testTableMoveTruncateAndDrop_947841941' 2023-07-11 18:16:58,469 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,472 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,473 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:58,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-11 18:16:58,485 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:58,485 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:58,485 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:58,485 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 2023-07-11 18:16:58,485 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:58,488 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/recovered.edits] 2023-07-11 18:16:58,489 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/recovered.edits] 2023-07-11 18:16:58,489 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/recovered.edits] 2023-07-11 18:16:58,489 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/recovered.edits] 2023-07-11 18:16:58,491 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/recovered.edits] 2023-07-11 18:16:58,503 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5/recovered.edits/4.seqid 2023-07-11 18:16:58,504 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/45884730e61070d79b55e19fd1e290d5 2023-07-11 18:16:58,505 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3/recovered.edits/4.seqid 2023-07-11 18:16:58,505 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8/recovered.edits/4.seqid 2023-07-11 18:16:58,506 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b/recovered.edits/4.seqid 2023-07-11 18:16:58,507 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4/recovered.edits/4.seqid 2023-07-11 18:16:58,508 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2b313e94e3bb9dfe764f7911bb7dba9b 2023-07-11 18:16:58,508 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/09347b68b7c9df4b68267ed426f900d3 2023-07-11 18:16:58,508 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/93117f2f0eac5f4b25372f994df8b5d4 2023-07-11 18:16:58,508 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4065515675c69107706229ae27cf88f8 2023-07-11 18:16:58,508 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-11 18:16:58,512 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,519 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-11 18:16:58,522 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099418524"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099418524"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689099417072.4065515675c69107706229ae27cf88f8.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099418524"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099418524"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:58,524 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099418524"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:58,527 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-11 18:16:58,527 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 93117f2f0eac5f4b25372f994df8b5d4, NAME => 'Group_testTableMoveTruncateAndDrop,,1689099417072.93117f2f0eac5f4b25372f994df8b5d4.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 09347b68b7c9df4b68267ed426f900d3, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689099417072.09347b68b7c9df4b68267ed426f900d3.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 4065515675c69107706229ae27cf88f8, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689099417072.4065515675c69107706229ae27cf88f8.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 
'r\x1C\xC7r\x1B'}, {ENCODED => 2b313e94e3bb9dfe764f7911bb7dba9b, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689099417072.2b313e94e3bb9dfe764f7911bb7dba9b.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 45884730e61070d79b55e19fd1e290d5, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689099417072.45884730e61070d79b55e19fd1e290d5.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-11 18:16:58,527 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-11 18:16:58,527 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099418527"}]},"ts":"9223372036854775807"} 2023-07-11 18:16:58,529 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-11 18:16:58,535 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-11 18:16:58,537 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 79 msec 2023-07-11 18:16:58,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-11 18:16:58,586 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-11 18:16:58,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:16:58,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:16:58,594 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:58,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:16:58,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:58,600 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:16:58,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:16:58,608 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:16:58,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:16:58,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:16:58,609 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:58,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:16:58,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:16:58,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_947841941, current retry=0 2023-07-11 18:16:58,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_947841941 => default 2023-07-11 18:16:58,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:58,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_947841941 2023-07-11 18:16:58,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:16:58,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:16:58,635 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:16:58,636 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:16:58,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK 
GroupInfo count: 4 2023-07-11 18:16:58,642 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:58,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:16:58,649 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100618649, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:16:58,650 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:16:58,653 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:58,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,657 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,658 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:16:58,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:58,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,690 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=503 (was 422) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582-prefix:jenkins-hbase4.apache.org,45471,1689099407428.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:40365 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582-prefix:jenkins-hbase4.apache.org,37889,1689099411612 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1586470521_17 at /127.0.0.1:49880 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1586470521_17 at /127.0.0.1:57584 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-176627252_17 at /127.0.0.1:49850 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37889Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:40365 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-176627252_17 at /127.0.0.1:57536 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-634 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37889 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:37889 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-175733140_17 at /127.0.0.1:38568 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-176627252_17 at /127.0.0.1:49898 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58592@0x7bf55d29 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58592@0x7bf55d29-SendThread(127.0.0.1:58592) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-176627252_17 at /127.0.0.1:38580 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-3-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1586470521_17 at /127.0.0.1:38614 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-780935ef-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1842482246-637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:37889-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-175733140_17 at /127.0.0.1:57560 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58592@0x7bf55d29-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37889 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1842482246-635-acceptor-0@2320545b-ServerConnector@2fbb41cf{HTTP/1.1, (http/1.1)}{0.0.0.0:40607} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=821 (was 679) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=582 (was 624), ProcessCount=172 (was 172), AvailableMemoryMB=2615 (was 3098) 2023-07-11 18:16:58,691 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-11 18:16:58,712 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=503, OpenFileDescriptor=821, MaxFileDescriptor=60000, SystemLoadAverage=582, ProcessCount=172, AvailableMemoryMB=2613 2023-07-11 18:16:58,713 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=503 is superior to 500 2023-07-11 18:16:58,713 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-11 18:16:58,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:16:58,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:16:58,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:58,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:16:58,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:58,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:16:58,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:16:58,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:16:58,740 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:16:58,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:16:58,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,748 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,751 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:16:58,753 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:58,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:16:58,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,766 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100618766, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:16:58,766 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:16:58,771 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:58,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,783 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:16:58,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:58,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,786 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-11 18:16:58,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:57346 deadline: 1689100618786, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-11 18:16:58,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-11 18:16:58,787 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:57346 deadline: 1689100618787, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-11 18:16:58,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-11 18:16:58,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:57346 deadline: 1689100618788, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-11 18:16:58,790 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-11 18:16:58,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-11 18:16:58,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:58,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:58,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:16:58,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:16:58,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:58,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:16:58,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:58,826 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-11 18:16:58,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:16:58,834 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:16:58,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:16:58,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:16:58,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:58,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:16:58,837 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:58,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:16:58,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:16:58,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:16:58,853 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:16:58,854 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:16:58,857 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:16:58,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:58,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:16:58,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100618877, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:16:58,878 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:16:58,880 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:58,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,882 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:16:58,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:58,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,904 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=506 (was 503) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=821 (was 821), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=582 (was 582), ProcessCount=172 (was 172), AvailableMemoryMB=2596 (was 2613) 2023-07-11 18:16:58,904 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-11 18:16:58,927 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=506, OpenFileDescriptor=821, MaxFileDescriptor=60000, SystemLoadAverage=582, ProcessCount=172, AvailableMemoryMB=2594 2023-07-11 18:16:58,927 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-11 18:16:58,927 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-11 18:16:58,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:16:58,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:16:58,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:16:58,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:16:58,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:16:58,940 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:16:58,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:16:58,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:16:58,954 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:16:58,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:16:58,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:16:58,963 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:58,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,975 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:16:58,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:16:58,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100618975, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:16:58,976 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:16:58,978 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:16:58,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,984 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:16:58,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:58,985 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:58,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:58,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:16:58,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:16:58,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-11 18:16:58,992 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:58,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 18:16:58,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:58,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:58,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:16:59,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:16:59,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:16:59,006 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:37773] to rsgroup bar 2023-07-11 18:16:59,008 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:16:59,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 18:16:59,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:16:59,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:16:59,012 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(238): Moving server region b41d0021b1b281d3ab8046d2e4311514, which do not belong to RSGroup bar 2023-07-11 18:16:59,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE 2023-07-11 18:16:59,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup bar 2023-07-11 18:16:59,019 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE 2023-07-11 18:16:59,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-11 18:16:59,020 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 
updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:59,021 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-11 18:16:59,022 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099419020"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099419020"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099419020"}]},"ts":"1689099419020"} 2023-07-11 18:16:59,023 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45471,1689099407428, state=CLOSING 2023-07-11 18:16:59,024 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; CloseRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:59,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-11 18:16:59,026 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:59,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=81, ppid=79, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:16:59,026 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:59,178 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:59,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-11 18:16:59,179 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b41d0021b1b281d3ab8046d2e4311514, disabling compactions & flushes 2023-07-11 18:16:59,180 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:16:59,180 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:59,180 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:16:59,180 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 
2023-07-11 18:16:59,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:16:59,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. after waiting 0 ms 2023-07-11 18:16:59,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:16:59,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:59,181 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:16:59,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b41d0021b1b281d3ab8046d2e4311514 1/1 column families, dataSize=5.02 KB heapSize=8.36 KB 2023-07-11 18:16:59,181 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=39.10 KB heapSize=60.12 KB 2023-07-11 18:16:59,240 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.02 KB at sequenceid=32 (bloomFilter=true), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/.tmp/m/1c5072fc590941feac748aed4c580d54 2023-07-11 18:16:59,243 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=36.21 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/info/32f3bb951abc4caeabb181e04b991af8 2023-07-11 18:16:59,269 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1c5072fc590941feac748aed4c580d54 2023-07-11 18:16:59,270 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/.tmp/m/1c5072fc590941feac748aed4c580d54 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/1c5072fc590941feac748aed4c580d54 2023-07-11 18:16:59,271 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32f3bb951abc4caeabb181e04b991af8 2023-07-11 18:16:59,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1c5072fc590941feac748aed4c580d54 2023-07-11 18:16:59,281 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/1c5072fc590941feac748aed4c580d54, entries=9, sequenceid=32, filesize=5.5 K 2023-07-11 18:16:59,282 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): 
Finished flush of dataSize ~5.02 KB/5140, heapSize ~8.34 KB/8544, currentSize=0 B/0 for b41d0021b1b281d3ab8046d2e4311514 in 101ms, sequenceid=32, compaction requested=false 2023-07-11 18:16:59,295 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/recovered.edits/35.seqid, newMaxSeqId=35, maxSeqId=12 2023-07-11 18:16:59,296 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.15 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/rep_barrier/1533f513943149d8b12f7e4c4c2821c6 2023-07-11 18:16:59,296 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:16:59,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:16:59,297 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:16:59,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b41d0021b1b281d3ab8046d2e4311514 move to jenkins-hbase4.apache.org,45821,1689099407865 record at close sequenceid=32 2023-07-11 18:16:59,298 DEBUG [PEWorker-4] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=80, ppid=78, state=RUNNABLE; CloseRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:16:59,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:16:59,303 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1533f513943149d8b12f7e4c4c2821c6 2023-07-11 18:16:59,320 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.73 KB at sequenceid=101 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/table/2e3c731af565403eb75fd6528ed8fe34 2023-07-11 18:16:59,326 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2e3c731af565403eb75fd6528ed8fe34 2023-07-11 18:16:59,327 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/info/32f3bb951abc4caeabb181e04b991af8 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/32f3bb951abc4caeabb181e04b991af8 2023-07-11 18:16:59,334 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32f3bb951abc4caeabb181e04b991af8 2023-07-11 18:16:59,334 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/32f3bb951abc4caeabb181e04b991af8, entries=31, sequenceid=101, filesize=8.4 K 2023-07-11 18:16:59,335 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/rep_barrier/1533f513943149d8b12f7e4c4c2821c6 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier/1533f513943149d8b12f7e4c4c2821c6 2023-07-11 18:16:59,342 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1533f513943149d8b12f7e4c4c2821c6 2023-07-11 18:16:59,342 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier/1533f513943149d8b12f7e4c4c2821c6, entries=10, sequenceid=101, filesize=6.1 K 2023-07-11 18:16:59,343 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/table/2e3c731af565403eb75fd6528ed8fe34 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/2e3c731af565403eb75fd6528ed8fe34 2023-07-11 18:16:59,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2e3c731af565403eb75fd6528ed8fe34 2023-07-11 18:16:59,349 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/2e3c731af565403eb75fd6528ed8fe34, entries=11, sequenceid=101, filesize=6.0 K 2023-07-11 18:16:59,350 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~39.10 KB/40037, heapSize ~60.07 KB/61512, currentSize=0 B/0 for 1588230740 in 169ms, sequenceid=101, compaction requested=false 2023-07-11 18:16:59,367 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=18 2023-07-11 18:16:59,368 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:16:59,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:16:59,369 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:16:59,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45821,1689099407865 record at close sequenceid=101 2023-07-11 18:16:59,371 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-11 18:16:59,372 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in 
hbase:meta; skipping -- ServerName required 2023-07-11 18:16:59,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=81, resume processing ppid=79 2023-07-11 18:16:59,374 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, ppid=79, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45471,1689099407428 in 346 msec 2023-07-11 18:16:59,375 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=79, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:16:59,526 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45821,1689099407865, state=OPENING 2023-07-11 18:16:59,528 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:59,528 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=79, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:16:59,528 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:59,685 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 18:16:59,685 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:16:59,687 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45821%2C1689099407865.meta, suffix=.meta, logDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45821,1689099407865, archiveDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs, maxLogs=32 2023-07-11 18:16:59,703 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK] 2023-07-11 18:16:59,704 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK] 2023-07-11 18:16:59,704 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK] 2023-07-11 18:16:59,707 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45821,1689099407865/jenkins-hbase4.apache.org%2C45821%2C1689099407865.meta.1689099419688.meta 2023-07-11 18:16:59,708 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK], DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK]] 2023-07-11 18:16:59,708 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:16:59,708 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:16:59,708 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 18:16:59,708 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-11 18:16:59,709 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 18:16:59,709 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:16:59,709 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 18:16:59,709 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 18:16:59,711 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:16:59,712 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:59,712 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info 2023-07-11 18:16:59,712 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:16:59,725 INFO [StoreFileOpener-info-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 32f3bb951abc4caeabb181e04b991af8 2023-07-11 18:16:59,725 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/32f3bb951abc4caeabb181e04b991af8 2023-07-11 18:16:59,732 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/8daf5d36959849338dfd08a93877c482 2023-07-11 18:16:59,732 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:59,733 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:16:59,734 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:59,734 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:16:59,735 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:16:59,743 INFO [StoreFileOpener-rep_barrier-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1533f513943149d8b12f7e4c4c2821c6 2023-07-11 18:16:59,743 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier/1533f513943149d8b12f7e4c4c2821c6 2023-07-11 18:16:59,743 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:59,743 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:16:59,744 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:59,744 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table 2023-07-11 18:16:59,745 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 18:16:59,753 INFO [StoreFileOpener-table-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2e3c731af565403eb75fd6528ed8fe34 2023-07-11 18:16:59,754 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/2e3c731af565403eb75fd6528ed8fe34 2023-07-11 18:16:59,761 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/d80457fa51ad47e7a3cc1d97e990f7ee 2023-07-11 18:16:59,761 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:16:59,762 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:59,763 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740 2023-07-11 18:16:59,766 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-11 18:16:59,768 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:16:59,769 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=105; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10433936640, jitterRate=-0.02826392650604248}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:16:59,769 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:16:59,770 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=82, masterSystemTime=1689099419680 2023-07-11 18:16:59,772 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 18:16:59,772 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 18:16:59,772 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45821,1689099407865, state=OPEN 2023-07-11 18:16:59,774 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:16:59,774 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:16:59,775 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=CLOSED 2023-07-11 18:16:59,775 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099419775"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099419775"}]},"ts":"1689099419775"} 2023-07-11 18:16:59,776 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45471] ipc.CallRunner(144): callId: 188 service: ClientService methodName: Mutate size: 214 connection: 172.31.14.131:53268 deadline: 1689099479776, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45821 startCode=1689099407865. As of locationSeqNum=101. 
2023-07-11 18:16:59,776 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=79 2023-07-11 18:16:59,777 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=79, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45821,1689099407865 in 246 msec 2023-07-11 18:16:59,779 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=79, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 757 msec 2023-07-11 18:16:59,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-11 18:16:59,881 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; CloseRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45471,1689099407428 in 855 msec 2023-07-11 18:16:59,882 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:00,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-11 18:17:00,033 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:00,033 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099420033"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099420033"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099420033"}]},"ts":"1689099420033"} 2023-07-11 18:17:00,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=78, state=RUNNABLE; OpenRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:00,192 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:00,192 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b41d0021b1b281d3ab8046d2e4311514, NAME => 'hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:00,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:17:00,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. service=MultiRowMutationService 2023-07-11 18:17:00,193 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-11 18:17:00,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:00,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,193 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,195 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,196 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m 2023-07-11 18:17:00,196 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m 2023-07-11 18:17:00,197 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b41d0021b1b281d3ab8046d2e4311514 columnFamilyName m 2023-07-11 18:17:00,211 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1c5072fc590941feac748aed4c580d54 2023-07-11 18:17:00,211 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/1c5072fc590941feac748aed4c580d54 2023-07-11 18:17:00,219 DEBUG [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(539): loaded hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/fbb05319683b4455b35806517df25ec8 2023-07-11 18:17:00,219 INFO [StoreOpener-b41d0021b1b281d3ab8046d2e4311514-1] regionserver.HStore(310): Store=b41d0021b1b281d3ab8046d2e4311514/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, 
parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:00,220 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,222 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,225 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:00,226 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b41d0021b1b281d3ab8046d2e4311514; next sequenceid=36; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@14f834bc, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:00,226 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:17:00,227 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514., pid=83, masterSystemTime=1689099420186 2023-07-11 18:17:00,229 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:00,229 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 
2023-07-11 18:17:00,230 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=b41d0021b1b281d3ab8046d2e4311514, regionState=OPEN, openSeqNum=36, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:00,230 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099420230"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099420230"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099420230"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099420230"}]},"ts":"1689099420230"} 2023-07-11 18:17:00,243 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=78 2023-07-11 18:17:00,243 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=78, state=SUCCESS; OpenRegionProcedure b41d0021b1b281d3ab8046d2e4311514, server=jenkins-hbase4.apache.org,45821,1689099407865 in 197 msec 2023-07-11 18:17:00,247 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=b41d0021b1b281d3ab8046d2e4311514, REOPEN/MOVE in 1.2310 sec 2023-07-11 18:17:00,594 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 18:17:01,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612, jenkins-hbase4.apache.org,45471,1689099407428] are moved back to default 2023-07-11 18:17:01,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-11 18:17:01,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:01,028 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45471] ipc.CallRunner(144): callId: 14 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:53276 deadline: 1689099481028, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45821 startCode=1689099407865. As of locationSeqNum=32. 2023-07-11 18:17:01,129 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45471] ipc.CallRunner(144): callId: 15 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:53276 deadline: 1689099481129, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45821 startCode=1689099407865. As of locationSeqNum=101. 
2023-07-11 18:17:01,231 DEBUG [hconnection-0x51dbb1fe-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:01,236 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43594, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:01,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:01,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:01,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-11 18:17:01,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:01,264 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:01,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:01,267 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:01,267 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 84 2023-07-11 18:17:01,268 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45471] ipc.CallRunner(144): callId: 193 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:53268 deadline: 1689099481268, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45821 startCode=1689099407865. As of locationSeqNum=32. 
2023-07-11 18:17:01,268 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 18:17:01,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 18:17:01,373 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:01,373 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 18:17:01,374 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:01,374 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:01,384 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:01,386 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,387 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 empty. 2023-07-11 18:17:01,387 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,387 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-11 18:17:01,404 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:01,406 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6bea8348fc35cef12b59ab53d5e74e43, NAME => 'Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:01,422 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:01,423 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 6bea8348fc35cef12b59ab53d5e74e43, disabling compactions & flushes 2023-07-11 18:17:01,423 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:01,423 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:01,423 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. after waiting 0 ms 2023-07-11 18:17:01,423 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:01,423 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:01,423 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:01,426 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:01,427 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099421427"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099421427"}]},"ts":"1689099421427"} 2023-07-11 18:17:01,429 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-11 18:17:01,430 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:01,430 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099421430"}]},"ts":"1689099421430"} 2023-07-11 18:17:01,432 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-11 18:17:01,436 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, ASSIGN}] 2023-07-11 18:17:01,438 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, ASSIGN 2023-07-11 18:17:01,439 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=85, ppid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:01,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 18:17:01,591 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:01,591 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099421591"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099421591"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099421591"}]},"ts":"1689099421591"} 2023-07-11 18:17:01,593 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=85, state=RUNNABLE; OpenRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:01,749 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 
2023-07-11 18:17:01,749 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6bea8348fc35cef12b59ab53d5e74e43, NAME => 'Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:01,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:01,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,752 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,754 DEBUG [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f 2023-07-11 18:17:01,754 DEBUG [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f 2023-07-11 18:17:01,754 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6bea8348fc35cef12b59ab53d5e74e43 columnFamilyName f 2023-07-11 18:17:01,755 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] regionserver.HStore(310): Store=6bea8348fc35cef12b59ab53d5e74e43/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:01,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,756 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:01,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6bea8348fc35cef12b59ab53d5e74e43; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10529377120, jitterRate=-0.019375339150428772}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:01,762 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:01,763 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43., pid=86, masterSystemTime=1689099421745 2023-07-11 18:17:01,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:01,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 
2023-07-11 18:17:01,766 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=85 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:01,766 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099421766"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099421766"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099421766"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099421766"}]},"ts":"1689099421766"} 2023-07-11 18:17:01,769 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=85 2023-07-11 18:17:01,770 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=85, state=SUCCESS; OpenRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865 in 175 msec 2023-07-11 18:17:01,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-11 18:17:01,772 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, ASSIGN in 334 msec 2023-07-11 18:17:01,773 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:01,773 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099421773"}]},"ts":"1689099421773"} 2023-07-11 18:17:01,775 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-11 18:17:01,779 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=84, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:01,781 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 515 msec 2023-07-11 18:17:01,836 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-11 18:17:01,836 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testFailRemoveGroup' 2023-07-11 18:17:01,837 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-11 18:17:01,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=84 2023-07-11 18:17:01,873 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 84 completed 2023-07-11 18:17:01,873 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. 
Timeout = 60000ms 2023-07-11 18:17:01,873 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:01,874 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45471] ipc.CallRunner(144): callId: 276 service: ClientService methodName: Scan size: 96 connection: 172.31.14.131:53286 deadline: 1689099481874, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45821 startCode=1689099407865. As of locationSeqNum=101. 2023-07-11 18:17:01,975 DEBUG [hconnection-0x7b3db8b3-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:01,977 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43610, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:01,987 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 2023-07-11 18:17:01,987 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:01,987 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-11 18:17:01,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-11 18:17:01,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:01,993 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 18:17:01,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:01,994 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:01,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-11 18:17:01,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 6bea8348fc35cef12b59ab53d5e74e43 to RSGroup bar 2023-07-11 18:17:01,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:01,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:01,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:01,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:01,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-11 18:17:01,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:01,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE 2023-07-11 18:17:01,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-11 18:17:01,999 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE 2023-07-11 18:17:02,000 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:02,000 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099422000"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099422000"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099422000"}]},"ts":"1689099422000"} 2023-07-11 18:17:02,006 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:02,162 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6bea8348fc35cef12b59ab53d5e74e43, disabling compactions & flushes 2023-07-11 18:17:02,165 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:02,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:02,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. after waiting 0 ms 2023-07-11 18:17:02,165 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:02,171 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:02,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 
2023-07-11 18:17:02,172 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:02,172 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6bea8348fc35cef12b59ab53d5e74e43 move to jenkins-hbase4.apache.org,37889,1689099411612 record at close sequenceid=2 2023-07-11 18:17:02,175 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,176 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=CLOSED 2023-07-11 18:17:02,176 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099422176"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099422176"}]},"ts":"1689099422176"} 2023-07-11 18:17:02,180 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-11 18:17:02,180 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865 in 176 msec 2023-07-11 18:17:02,181 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37889,1689099411612; forceNewPlan=false, retain=false 2023-07-11 18:17:02,331 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 18:17:02,331 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:02,332 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099422331"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099422331"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099422331"}]},"ts":"1689099422331"} 2023-07-11 18:17:02,334 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:17:02,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 
2023-07-11 18:17:02,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6bea8348fc35cef12b59ab53d5e74e43, NAME => 'Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:02,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:02,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,490 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,491 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,492 DEBUG [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f 2023-07-11 18:17:02,492 DEBUG [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f 2023-07-11 18:17:02,493 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6bea8348fc35cef12b59ab53d5e74e43 columnFamilyName f 2023-07-11 18:17:02,494 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] regionserver.HStore(310): Store=6bea8348fc35cef12b59ab53d5e74e43/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:02,494 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,496 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:02,500 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6bea8348fc35cef12b59ab53d5e74e43; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11393683360, jitterRate=0.061119452118873596}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:02,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:02,501 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43., pid=89, masterSystemTime=1689099422485 2023-07-11 18:17:02,502 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:02,502 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:02,503 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:02,503 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099422503"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099422503"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099422503"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099422503"}]},"ts":"1689099422503"} 2023-07-11 18:17:02,507 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-11 18:17:02,507 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,37889,1689099411612 in 171 msec 2023-07-11 18:17:02,508 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE in 510 msec 2023-07-11 18:17:02,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-11 18:17:02,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
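The block above is one complete rsgroup table move: the MoveTables RPC, a znode update per affected group, then a REOPEN/MOVE TransitRegionStateProcedure (close on the old server, meta update, open on the new one) for the table's region. A minimal client-side sketch of the calls that drive it, assuming the branch-2.4 RSGroupAdminClient API whose moveServers/moveTables methods appear in the stack traces later in this log; host names and the group name 'bar' are copied from this run, and the snippet is illustrative rather than the test's own code:

    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToBar {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection()) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // The test created group 'bar' and populated it with servers earlier in the run;
          // included here only so the sketch stands on its own.
          rsGroupAdmin.addRSGroup("bar");
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:37889")), "bar");
          // This is the call logged as RSGroupAdminService.MoveTables; the master then runs
          // one REOPEN/MOVE procedure per region of the table, as seen above.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }
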
2023-07-11 18:17:02,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:03,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:03,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:03,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-11 18:17:03,008 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:03,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-11 18:17:03,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:03,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 286 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:57346 deadline: 1689100623009, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-11 18:17:03,010 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:17:03,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:03,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 288 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:57346 deadline: 1689100623010, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-11 18:17:03,013 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-11 18:17:03,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:03,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 18:17:03,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:03,029 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:03,030 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-11 18:17:03,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 6bea8348fc35cef12b59ab53d5e74e43 to RSGroup default 2023-07-11 18:17:03,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE 2023-07-11 18:17:03,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 18:17:03,033 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE 2023-07-11 18:17:03,035 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:03,035 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099423035"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099423035"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099423035"}]},"ts":"1689099423035"} 2023-07-11 18:17:03,037 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE; CloseRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:17:03,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6bea8348fc35cef12b59ab53d5e74e43, disabling compactions & flushes 2023-07-11 18:17:03,192 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:03,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:03,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. after waiting 0 ms 2023-07-11 18:17:03,192 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:03,196 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:17:03,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 
2023-07-11 18:17:03,197 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:03,197 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6bea8348fc35cef12b59ab53d5e74e43 move to jenkins-hbase4.apache.org,45821,1689099407865 record at close sequenceid=5 2023-07-11 18:17:03,199 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,200 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=CLOSED 2023-07-11 18:17:03,200 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099423200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099423200"}]},"ts":"1689099423200"} 2023-07-11 18:17:03,204 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-11 18:17:03,205 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; CloseRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,37889,1689099411612 in 165 msec 2023-07-11 18:17:03,205 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=90, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:03,356 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:03,356 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099423356"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099423356"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099423356"}]},"ts":"1689099423356"} 2023-07-11 18:17:03,358 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=90, state=RUNNABLE; OpenRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:03,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 
2023-07-11 18:17:03,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6bea8348fc35cef12b59ab53d5e74e43, NAME => 'Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:03,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:03,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,519 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,520 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,522 DEBUG [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f 2023-07-11 18:17:03,522 DEBUG [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f 2023-07-11 18:17:03,522 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6bea8348fc35cef12b59ab53d5e74e43 columnFamilyName f 2023-07-11 18:17:03,523 INFO [StoreOpener-6bea8348fc35cef12b59ab53d5e74e43-1] regionserver.HStore(310): Store=6bea8348fc35cef12b59ab53d5e74e43/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:03,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,525 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,529 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:03,530 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6bea8348fc35cef12b59ab53d5e74e43; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10671900000, jitterRate=-0.006101861596107483}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:03,530 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:03,531 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43., pid=92, masterSystemTime=1689099423514 2023-07-11 18:17:03,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:03,533 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:03,533 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=90 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:03,534 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099423533"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099423533"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099423533"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099423533"}]},"ts":"1689099423533"} 2023-07-11 18:17:03,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=90 2023-07-11 18:17:03,537 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=90, state=SUCCESS; OpenRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865 in 177 msec 2023-07-11 18:17:03,538 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, REOPEN/MOVE in 506 msec 2023-07-11 18:17:04,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=90 2023-07-11 18:17:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-11 18:17:04,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:04,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-11 18:17:04,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:04,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 295 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:57346 deadline: 1689100624040, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
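The constraint rejections above ("RSGroup bar has 1 tables", "Cannot leave a RSGroup bar that contains tables without servers", and now "RSGroup bar has 3 servers") spell out the order a group must be drained in before removeRSGroup can succeed: tables out first, then servers, then the remove. A short sketch under the same assumed branch-2.4 client API as the earlier snippet (table, group, and server names copied from this run; barServers would be the group's member addresses, e.g. from getRSGroupInfo("bar").getServers()):

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class DrainAndRemoveBar {
      // Drain group 'bar' in the order the ConstraintExceptions above require.
      static void drainAndRemove(RSGroupAdminClient rsGroupAdmin, Set<Address> barServers)
          throws IOException {
        rsGroupAdmin.moveTables(
            Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "default");
        rsGroupAdmin.moveServers(barServers, "default");
        rsGroupAdmin.removeRSGroup("bar"); // only succeeds once the group holds neither tables nor servers
      }
    }
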
2023-07-11 18:17:04,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:17:04,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-11 18:17:04,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:04,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-11 18:17:04,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612, jenkins-hbase4.apache.org,45471,1689099407428] are moved back to bar 2023-07-11 18:17:04,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-11 18:17:04,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:04,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-11 18:17:04,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:17:04,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:04,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,065 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-11 18:17:04,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-11 18:17:04,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 18:17:04,070 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099424070"}]},"ts":"1689099424070"} 2023-07-11 18:17:04,071 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-11 18:17:04,073 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-11 18:17:04,074 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, UNASSIGN}] 2023-07-11 18:17:04,076 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=94, ppid=93, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, UNASSIGN 2023-07-11 18:17:04,076 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:04,077 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099424076"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099424076"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099424076"}]},"ts":"1689099424076"} 2023-07-11 18:17:04,078 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE; CloseRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:04,170 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 18:17:04,229 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:04,230 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6bea8348fc35cef12b59ab53d5e74e43, disabling compactions & flushes 2023-07-11 18:17:04,231 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:04,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:04,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. after waiting 0 ms 2023-07-11 18:17:04,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:04,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-11 18:17:04,235 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43. 2023-07-11 18:17:04,235 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6bea8348fc35cef12b59ab53d5e74e43: 2023-07-11 18:17:04,237 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:04,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 18:17:04,416 INFO [AsyncFSWAL-0-hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData-prefix:jenkins-hbase4.apache.org,45397,1689099405546] wal.AbstractFSWAL(1141): Slow sync cost: 179 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:33363,DS-4c181186-22ac-44a8-b1a5-3c334dc774a7,DISK], DatanodeInfoWithStorage[127.0.0.1:41511,DS-8dff4402-f4a4-4098-b391-d4e5069af3ae,DISK], DatanodeInfoWithStorage[127.0.0.1:39467,DS-16d7a147-10c7-4220-b527-dbfb950941dd,DISK]] 2023-07-11 18:17:04,417 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=94 updating hbase:meta row=6bea8348fc35cef12b59ab53d5e74e43, regionState=CLOSED 2023-07-11 18:17:04,417 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689099424416"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099424416"}]},"ts":"1689099424416"} 2023-07-11 18:17:04,422 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-11 18:17:04,422 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; CloseRegionProcedure 6bea8348fc35cef12b59ab53d5e74e43, server=jenkins-hbase4.apache.org,45821,1689099407865 in 342 msec 2023-07-11 18:17:04,424 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=94, resume 
processing ppid=93 2023-07-11 18:17:04,424 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, ppid=93, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=6bea8348fc35cef12b59ab53d5e74e43, UNASSIGN in 348 msec 2023-07-11 18:17:04,424 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099424424"}]},"ts":"1689099424424"} 2023-07-11 18:17:04,426 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-11 18:17:04,428 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-11 18:17:04,430 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 363 msec 2023-07-11 18:17:04,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-11 18:17:04,718 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-11 18:17:04,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-11 18:17:04,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,722 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=96, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-11 18:17:04,723 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=96, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,726 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:04,728 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:04,730 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits] 
2023-07-11 18:17:04,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-11 18:17:04,736 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits/10.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43/recovered.edits/10.seqid 2023-07-11 18:17:04,737 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testFailRemoveGroup/6bea8348fc35cef12b59ab53d5e74e43 2023-07-11 18:17:04,737 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-11 18:17:04,740 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=96, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,742 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-11 18:17:04,745 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-11 18:17:04,746 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=96, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,746 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-11 18:17:04,747 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099424746"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:04,749 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 18:17:04,749 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6bea8348fc35cef12b59ab53d5e74e43, NAME => 'Group_testFailRemoveGroup,,1689099421263.6bea8348fc35cef12b59ab53d5e74e43.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 18:17:04,749 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-11 18:17:04,749 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099424749"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:04,750 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-11 18:17:04,753 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=96, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-11 18:17:04,755 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 34 msec 2023-07-11 18:17:04,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=96 2023-07-11 18:17:04,834 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 96 completed 2023-07-11 18:17:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:04,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
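Between the DisableTableProcedure (pid=93-95) and the DeleteTableProcedure (pid=96) above, the table is unassigned, its region directory archived, its meta rows deleted, and the rsgroup endpoint drops it from group 'default'. On the client side this is plain Admin API usage; a minimal sketch (standard HBase client API, assuming only a reachable cluster configuration on the classpath):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DropGroupTestTable {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection();
             Admin admin = conn.getAdmin()) {
          admin.disableTable(table);  // drives the DisableTableProcedure / UNASSIGN seen at pid=93/94
          admin.deleteTable(table);   // drives the DeleteTableProcedure seen at pid=96
        }
      }
    }
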
2023-07-11 18:17:04,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:04,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:04,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:04,845 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:04,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,852 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:04,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:04,858 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:04,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:04,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:04,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:04,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:04,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:04,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 345 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100624877, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:04,879 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:04,881 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:04,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,882 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:04,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:04,883 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:04,908 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=522 (was 506) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_364944408_17 at /127.0.0.1:55088 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1964151006_17 at /127.0.0.1:50064 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-11 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582-prefix:jenkins-hbase4.apache.org,45821,1689099407865.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-629278552-172.31.14.131-1689099401665:blk_1073741861_1037, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1964151006_17 at /127.0.0.1:37072 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7b3db8b3-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1964151006_17 at /127.0.0.1:38772 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-176627252_17 at /127.0.0.1:57786 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1964151006_17 at /127.0.0.1:57764 [Receiving block BP-629278552-172.31.14.131-1689099401665:blk_1073741861_1037] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=826 (was 821) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=552 (was 582), ProcessCount=172 (was 172), AvailableMemoryMB=2220 (was 2594) 2023-07-11 18:17:04,909 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-11 18:17:04,930 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=522, OpenFileDescriptor=826, MaxFileDescriptor=60000, SystemLoadAverage=552, ProcessCount=172, AvailableMemoryMB=2220 2023-07-11 18:17:04,930 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=522 is superior to 500 2023-07-11 18:17:04,930 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-11 18:17:04,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:04,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:04,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:04,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:04,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:04,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:04,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,943 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:04,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:04,948 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:04,949 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:04,951 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,951 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:04,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:04,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:04,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:04,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 373 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100624960, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:04,961 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:04,965 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:04,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,966 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:04,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:04,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:04,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:04,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:04,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_539869992 2023-07-11 18:17:04,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,974 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:04,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:04,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:04,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,983 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,986 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37773] to rsgroup Group_testMultiTableMove_539869992 2023-07-11 18:17:04,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:04,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:04,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:04,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:04,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 18:17:04,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650] are moved back to default 2023-07-11 18:17:04,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_539869992 2023-07-11 18:17:04,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:04,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:04,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:04,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_539869992 2023-07-11 18:17:04,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:04,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:05,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:05,002 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:05,002 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 97 2023-07-11 18:17:05,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 18:17:05,005 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:05,005 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:05,006 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:05,006 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:05,016 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:05,018 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,018 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 empty. 2023-07-11 18:17:05,019 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,019 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-11 18:17:05,049 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:05,051 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9591da4c5136ac8a8cad67f977f01c23, NAME => 'GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:05,074 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:05,074 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
9591da4c5136ac8a8cad67f977f01c23, disabling compactions & flushes 2023-07-11 18:17:05,074 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:05,074 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:05,074 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. after waiting 0 ms 2023-07-11 18:17:05,074 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:05,074 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:05,074 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 9591da4c5136ac8a8cad67f977f01c23: 2023-07-11 18:17:05,079 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:05,080 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099425080"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099425080"}]},"ts":"1689099425080"} 2023-07-11 18:17:05,082 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
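The RSGroupAdminService.AddRSGroup and MoveServers requests recorded at 18:17:04 above are driven from the client side against the rsgroup coprocessor endpoint. Below is a minimal sketch of such calls, assuming the RSGroupAdminClient shipped in the hbase-rsgroup module; the group name and server address are taken from the log, and this is an illustration rather than the test's exact code.

import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class AddGroupAndMoveServerSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Assumption: RSGroupAdminClient talks to the same RSGroupAdminService
      // endpoint that logged the requests above.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // AddRSGroup: creates the (initially empty) group.
      rsGroupAdmin.addRSGroup("Group_testMultiTableMove_539869992");

      // MoveServers: moves one region server out of 'default' into the new group.
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37773)),
          "Group_testMultiTableMove_539869992");

      // GetRSGroupInfo: read the group back, as the GetRSGroupInfo requests in the log do.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testMultiTableMove_539869992");
      System.out.println(info.getServers());
    }
  }
}

Note that moveServers takes host:port Address values rather than full ServerName strings, which is why the MoveServers log line shows only jenkins-hbase4.apache.org:37773.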
2023-07-11 18:17:05,083 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:05,083 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099425083"}]},"ts":"1689099425083"} 2023-07-11 18:17:05,084 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-11 18:17:05,090 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:05,090 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:05,090 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:05,090 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:05,091 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:05,091 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, ASSIGN}] 2023-07-11 18:17:05,093 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, ASSIGN 2023-07-11 18:17:05,094 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:05,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 18:17:05,244 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 18:17:05,246 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:05,246 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099425246"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099425246"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099425246"}]},"ts":"1689099425246"} 2023-07-11 18:17:05,248 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:05,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 18:17:05,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:05,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9591da4c5136ac8a8cad67f977f01c23, NAME => 'GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:05,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:05,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,406 INFO [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,409 DEBUG [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/f 2023-07-11 18:17:05,409 DEBUG [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/f 2023-07-11 18:17:05,409 INFO [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9591da4c5136ac8a8cad67f977f01c23 columnFamilyName f 2023-07-11 18:17:05,410 INFO [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] regionserver.HStore(310): Store=9591da4c5136ac8a8cad67f977f01c23/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:05,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,415 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:05,417 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:05,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9591da4c5136ac8a8cad67f977f01c23; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11287048960, jitterRate=0.05118834972381592}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:05,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9591da4c5136ac8a8cad67f977f01c23: 2023-07-11 18:17:05,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23., pid=99, masterSystemTime=1689099425399 2023-07-11 18:17:05,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:05,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 
2023-07-11 18:17:05,421 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:05,421 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099425421"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099425421"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099425421"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099425421"}]},"ts":"1689099425421"} 2023-07-11 18:17:05,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-11 18:17:05,425 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,45471,1689099407428 in 174 msec 2023-07-11 18:17:05,427 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-11 18:17:05,427 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, ASSIGN in 334 msec 2023-07-11 18:17:05,428 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:05,428 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099425428"}]},"ts":"1689099425428"} 2023-07-11 18:17:05,430 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-11 18:17:05,434 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:05,436 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 436 msec 2023-07-11 18:17:05,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-11 18:17:05,608 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 97 completed 2023-07-11 18:17:05,608 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-11 18:17:05,609 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:05,636 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
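The create request logged by HMaster at 18:17:04,998 and the CreateTableProcedure (pid=97) it spawned can be produced by a plain Admin.createTable call with a single column family 'f'. A minimal sketch follows, assuming the standard HBase 2.x client API; the table and family names come from the log, and this is not necessarily how the test builds its descriptor.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateGroupTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Single family 'f'; the builder defaults match the attributes printed in the
      // create log line (VERSIONS=1, BLOCKSIZE=65536, no compression or encoding).
      TableDescriptor desc =
          TableDescriptorBuilder.newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("f")))
              .build();
      // Blocks until the CreateTableProcedure (pid=97 above) finishes and the
      // table's single region is assigned.
      admin.createTable(desc);
    }
  }
}

The same call shape, with the table name swapped, accounts for the GrouptestMultiTableMoveB creation (pid=100) that follows in the log.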
2023-07-11 18:17:05,636 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:05,636 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-11 18:17:05,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:05,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:05,644 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:05,644 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 100 2023-07-11 18:17:05,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 18:17:05,649 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:05,650 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:05,653 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:05,654 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:05,658 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:05,661 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:05,662 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f empty. 
2023-07-11 18:17:05,663 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:05,663 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-11 18:17:05,696 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 18:17:05,742 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:05,746 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => df8bbea0f7e6464a166e7d021c55121f, NAME => 'GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:05,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 18:17:05,797 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:05,797 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing df8bbea0f7e6464a166e7d021c55121f, disabling compactions & flushes 2023-07-11 18:17:05,797 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:05,797 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:05,797 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. after waiting 0 ms 2023-07-11 18:17:05,797 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:05,798 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 
2023-07-11 18:17:05,798 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for df8bbea0f7e6464a166e7d021c55121f: 2023-07-11 18:17:05,801 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:05,802 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099425802"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099425802"}]},"ts":"1689099425802"} 2023-07-11 18:17:05,803 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:05,804 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:05,804 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099425804"}]},"ts":"1689099425804"} 2023-07-11 18:17:05,806 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-11 18:17:05,810 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:05,810 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:05,810 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:05,810 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:05,810 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:05,811 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, ASSIGN}] 2023-07-11 18:17:05,813 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, ASSIGN 2023-07-11 18:17:05,814 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=101, ppid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:05,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 18:17:05,964 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 18:17:05,965 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:05,966 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099425965"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099425965"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099425965"}]},"ts":"1689099425965"} 2023-07-11 18:17:05,975 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=101, state=RUNNABLE; OpenRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:06,134 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => df8bbea0f7e6464a166e7d021c55121f, NAME => 'GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:06,134 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:06,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,135 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,136 INFO [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,138 DEBUG [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/f 2023-07-11 18:17:06,139 DEBUG [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/f 2023-07-11 18:17:06,139 INFO [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region df8bbea0f7e6464a166e7d021c55121f columnFamilyName f 2023-07-11 18:17:06,139 INFO [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] regionserver.HStore(310): Store=df8bbea0f7e6464a166e7d021c55121f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:06,140 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,141 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,144 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,146 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:06,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened df8bbea0f7e6464a166e7d021c55121f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10510829120, jitterRate=-0.02110275626182556}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:06,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for df8bbea0f7e6464a166e7d021c55121f: 2023-07-11 18:17:06,147 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f., pid=102, masterSystemTime=1689099426127 2023-07-11 18:17:06,149 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,149 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 
2023-07-11 18:17:06,150 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:06,150 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426150"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099426150"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099426150"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099426150"}]},"ts":"1689099426150"} 2023-07-11 18:17:06,154 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=101 2023-07-11 18:17:06,154 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=101, state=SUCCESS; OpenRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,45821,1689099407865 in 177 msec 2023-07-11 18:17:06,156 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=100 2023-07-11 18:17:06,156 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, ASSIGN in 343 msec 2023-07-11 18:17:06,157 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:06,157 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099426157"}]},"ts":"1689099426157"} 2023-07-11 18:17:06,158 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-11 18:17:06,163 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=100, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:06,164 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 525 msec 2023-07-11 18:17:06,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=100 2023-07-11 18:17:06,250 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 100 completed 2023-07-11 18:17:06,251 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-11 18:17:06,251 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:06,254 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-11 18:17:06,254 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:06,254 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-11 18:17:06,255 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:06,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-11 18:17:06,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:06,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-11 18:17:06,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:06,270 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_539869992 2023-07-11 18:17:06,274 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_539869992 2023-07-11 18:17:06,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:06,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:06,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:06,279 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:06,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_539869992 2023-07-11 18:17:06,280 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region df8bbea0f7e6464a166e7d021c55121f to RSGroup Group_testMultiTableMove_539869992 2023-07-11 18:17:06,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, REOPEN/MOVE 2023-07-11 18:17:06,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_539869992 2023-07-11 18:17:06,282 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 9591da4c5136ac8a8cad67f977f01c23 to RSGroup Group_testMultiTableMove_539869992 2023-07-11 18:17:06,282 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, REOPEN/MOVE 2023-07-11 18:17:06,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, REOPEN/MOVE 2023-07-11 18:17:06,284 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:06,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_539869992, current retry=0 2023-07-11 18:17:06,288 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, REOPEN/MOVE 2023-07-11 18:17:06,288 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426284"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099426284"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099426284"}]},"ts":"1689099426284"} 2023-07-11 18:17:06,290 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:06,290 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426290"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099426290"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099426290"}]},"ts":"1689099426290"} 2023-07-11 18:17:06,293 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=103, state=RUNNABLE; CloseRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:06,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=106, ppid=104, state=RUNNABLE; CloseRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:06,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing df8bbea0f7e6464a166e7d021c55121f, disabling compactions & flushes 2023-07-11 18:17:06,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. after waiting 0 ms 2023-07-11 18:17:06,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9591da4c5136ac8a8cad67f977f01c23, disabling compactions & flushes 2023-07-11 18:17:06,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:06,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:06,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. after waiting 0 ms 2023-07-11 18:17:06,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:06,472 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:06,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,474 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for df8bbea0f7e6464a166e7d021c55121f: 2023-07-11 18:17:06,474 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding df8bbea0f7e6464a166e7d021c55121f move to jenkins-hbase4.apache.org,37773,1689099407650 record at close sequenceid=2 2023-07-11 18:17:06,476 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:06,478 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 
2023-07-11 18:17:06,479 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9591da4c5136ac8a8cad67f977f01c23: 2023-07-11 18:17:06,479 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 9591da4c5136ac8a8cad67f977f01c23 move to jenkins-hbase4.apache.org,37773,1689099407650 record at close sequenceid=2 2023-07-11 18:17:06,480 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=CLOSED 2023-07-11 18:17:06,480 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426480"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099426480"}]},"ts":"1689099426480"} 2023-07-11 18:17:06,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,482 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=CLOSED 2023-07-11 18:17:06,483 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426482"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099426482"}]},"ts":"1689099426482"} 2023-07-11 18:17:06,485 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=103 2023-07-11 18:17:06,485 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=103, state=SUCCESS; CloseRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,45821,1689099407865 in 190 msec 2023-07-11 18:17:06,486 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=103, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:17:06,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=106, resume processing ppid=104 2023-07-11 18:17:06,487 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, ppid=104, state=SUCCESS; CloseRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,45471,1689099407428 in 189 msec 2023-07-11 18:17:06,489 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=104, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37773,1689099407650; forceNewPlan=false, retain=false 2023-07-11 18:17:06,637 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 
18:17:06,637 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:06,637 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426637"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099426637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099426637"}]},"ts":"1689099426637"} 2023-07-11 18:17:06,637 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426637"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099426637"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099426637"}]},"ts":"1689099426637"} 2023-07-11 18:17:06,639 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=103, state=RUNNABLE; OpenRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:17:06,639 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=104, state=RUNNABLE; OpenRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:17:06,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:06,798 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9591da4c5136ac8a8cad67f977f01c23, NAME => 'GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:06,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:06,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,802 INFO [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,814 DEBUG [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/f 2023-07-11 18:17:06,815 DEBUG [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/f 2023-07-11 18:17:06,815 INFO [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9591da4c5136ac8a8cad67f977f01c23 columnFamilyName f 2023-07-11 18:17:06,818 INFO [StoreOpener-9591da4c5136ac8a8cad67f977f01c23-1] regionserver.HStore(310): Store=9591da4c5136ac8a8cad67f977f01c23/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:06,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,821 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:06,827 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9591da4c5136ac8a8cad67f977f01c23; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10682572480, jitterRate=-0.005107909440994263}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:06,827 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9591da4c5136ac8a8cad67f977f01c23: 2023-07-11 18:17:06,828 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23., pid=108, masterSystemTime=1689099426791 2023-07-11 18:17:06,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:06,831 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 
2023-07-11 18:17:06,831 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => df8bbea0f7e6464a166e7d021c55121f, NAME => 'GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:06,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:06,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,832 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,833 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:06,833 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426833"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099426833"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099426833"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099426833"}]},"ts":"1689099426833"} 2023-07-11 18:17:06,838 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=104 2023-07-11 18:17:06,838 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=104, state=SUCCESS; OpenRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,37773,1689099407650 in 196 msec 2023-07-11 18:17:06,840 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=104, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, REOPEN/MOVE in 555 msec 2023-07-11 18:17:06,844 INFO [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,845 DEBUG [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/f 2023-07-11 18:17:06,846 DEBUG [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/f 2023-07-11 18:17:06,846 INFO [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region df8bbea0f7e6464a166e7d021c55121f columnFamilyName f 2023-07-11 18:17:06,847 INFO [StoreOpener-df8bbea0f7e6464a166e7d021c55121f-1] regionserver.HStore(310): Store=df8bbea0f7e6464a166e7d021c55121f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:06,848 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:06,854 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened df8bbea0f7e6464a166e7d021c55121f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11396978560, jitterRate=0.06142634153366089}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:06,855 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for df8bbea0f7e6464a166e7d021c55121f: 2023-07-11 18:17:06,855 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f., pid=107, masterSystemTime=1689099426791 2023-07-11 18:17:06,857 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:06,857 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 
2023-07-11 18:17:06,858 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=103 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:06,858 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099426858"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099426858"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099426858"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099426858"}]},"ts":"1689099426858"} 2023-07-11 18:17:06,876 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=103 2023-07-11 18:17:06,877 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=103, state=SUCCESS; OpenRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,37773,1689099407650 in 235 msec 2023-07-11 18:17:06,878 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, REOPEN/MOVE in 597 msec 2023-07-11 18:17:07,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=103 2023-07-11 18:17:07,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_539869992. 2023-07-11 18:17:07,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:07,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:07,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:07,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:07,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-11 18:17:07,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:07,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:07,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:07,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_539869992 2023-07-11 18:17:07,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:07,300 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-11 18:17:07,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-11 18:17:07,301 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-11 18:17:07,304 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099427304"}]},"ts":"1689099427304"} 2023-07-11 18:17:07,306 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-11 18:17:07,308 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-11 18:17:07,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, UNASSIGN}] 2023-07-11 18:17:07,310 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=110, ppid=109, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, UNASSIGN 2023-07-11 18:17:07,311 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:07,311 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099427311"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099427311"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099427311"}]},"ts":"1689099427311"} 2023-07-11 18:17:07,312 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE; CloseRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, 
server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:17:07,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-11 18:17:07,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:07,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9591da4c5136ac8a8cad67f977f01c23, disabling compactions & flushes 2023-07-11 18:17:07,466 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:07,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:07,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. after waiting 0 ms 2023-07-11 18:17:07,466 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 2023-07-11 18:17:07,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:17:07,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23. 
2023-07-11 18:17:07,470 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9591da4c5136ac8a8cad67f977f01c23: 2023-07-11 18:17:07,472 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:07,472 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=110 updating hbase:meta row=9591da4c5136ac8a8cad67f977f01c23, regionState=CLOSED 2023-07-11 18:17:07,472 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099427472"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099427472"}]},"ts":"1689099427472"} 2023-07-11 18:17:07,475 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-11 18:17:07,475 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; CloseRegionProcedure 9591da4c5136ac8a8cad67f977f01c23, server=jenkins-hbase4.apache.org,37773,1689099407650 in 162 msec 2023-07-11 18:17:07,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=110, resume processing ppid=109 2023-07-11 18:17:07,477 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=110, ppid=109, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=9591da4c5136ac8a8cad67f977f01c23, UNASSIGN in 166 msec 2023-07-11 18:17:07,478 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099427478"}]},"ts":"1689099427478"} 2023-07-11 18:17:07,479 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-11 18:17:07,481 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-11 18:17:07,482 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 180 msec 2023-07-11 18:17:07,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-11 18:17:07,607 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-11 18:17:07,607 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-11 18:17:07,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,610 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=112, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_539869992' 2023-07-11 18:17:07,611 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=112, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:07,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:07,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:07,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:07,615 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:07,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-11 18:17:07,617 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/recovered.edits] 2023-07-11 18:17:07,623 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/recovered.edits/7.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23/recovered.edits/7.seqid 2023-07-11 18:17:07,623 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveA/9591da4c5136ac8a8cad67f977f01c23 2023-07-11 18:17:07,623 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-11 18:17:07,626 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=112, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,628 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-11 18:17:07,629 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-11 18:17:07,630 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=112, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,631 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-11 18:17:07,631 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099427631"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:07,632 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 18:17:07,632 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9591da4c5136ac8a8cad67f977f01c23, NAME => 'GrouptestMultiTableMoveA,,1689099424998.9591da4c5136ac8a8cad67f977f01c23.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 18:17:07,632 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-11 18:17:07,632 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099427632"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:07,634 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-11 18:17:07,635 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=112, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-11 18:17:07,637 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=112, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 27 msec 2023-07-11 18:17:07,717 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=112 2023-07-11 18:17:07,718 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 112 completed 2023-07-11 18:17:07,718 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-11 18:17:07,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-11 18:17:07,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:07,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-11 18:17:07,723 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099427723"}]},"ts":"1689099427723"} 2023-07-11 18:17:07,724 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-11 18:17:07,729 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-11 18:17:07,730 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, UNASSIGN}] 2023-07-11 18:17:07,731 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, ppid=113, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, UNASSIGN 2023-07-11 18:17:07,732 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:07,732 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099427732"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099427732"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099427732"}]},"ts":"1689099427732"} 2023-07-11 18:17:07,734 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,37773,1689099407650}] 2023-07-11 18:17:07,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-11 18:17:07,831 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveB' 2023-07-11 18:17:07,885 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:07,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing df8bbea0f7e6464a166e7d021c55121f, disabling compactions & flushes 2023-07-11 18:17:07,887 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:07,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:07,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. after waiting 0 ms 2023-07-11 18:17:07,887 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 2023-07-11 18:17:07,890 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:17:07,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f. 
2023-07-11 18:17:07,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for df8bbea0f7e6464a166e7d021c55121f: 2023-07-11 18:17:07,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:07,893 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=df8bbea0f7e6464a166e7d021c55121f, regionState=CLOSED 2023-07-11 18:17:07,893 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689099427893"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099427893"}]},"ts":"1689099427893"} 2023-07-11 18:17:07,896 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-11 18:17:07,896 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure df8bbea0f7e6464a166e7d021c55121f, server=jenkins-hbase4.apache.org,37773,1689099407650 in 161 msec 2023-07-11 18:17:07,897 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=114, resume processing ppid=113 2023-07-11 18:17:07,897 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=114, ppid=113, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=df8bbea0f7e6464a166e7d021c55121f, UNASSIGN in 166 msec 2023-07-11 18:17:07,898 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099427898"}]},"ts":"1689099427898"} 2023-07-11 18:17:07,899 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-11 18:17:07,901 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-11 18:17:07,902 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 182 msec 2023-07-11 18:17:08,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-11 18:17:08,025 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-11 18:17:08,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-11 18:17:08,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:08,028 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=116, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:08,028 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_539869992' 2023-07-11 18:17:08,029 DEBUG [PEWorker-4] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=116, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:08,030 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:08,031 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,033 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:08,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-11 18:17:08,034 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/recovered.edits] 2023-07-11 18:17:08,039 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/recovered.edits/7.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f/recovered.edits/7.seqid 2023-07-11 18:17:08,040 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/GrouptestMultiTableMoveB/df8bbea0f7e6464a166e7d021c55121f 2023-07-11 18:17:08,040 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-11 18:17:08,042 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=116, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:08,045 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-11 18:17:08,047 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-11 18:17:08,048 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=116, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:08,048 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-11 18:17:08,048 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099428048"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:08,049 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 18:17:08,050 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => df8bbea0f7e6464a166e7d021c55121f, NAME => 'GrouptestMultiTableMoveB,,1689099425638.df8bbea0f7e6464a166e7d021c55121f.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 18:17:08,050 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-11 18:17:08,050 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099428050"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:08,051 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-11 18:17:08,053 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=116, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-11 18:17:08,054 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=116, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 27 msec 2023-07-11 18:17:08,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=116 2023-07-11 18:17:08,135 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 116 completed 2023-07-11 18:17:08,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:08,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:17:08,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_539869992 2023-07-11 18:17:08,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_539869992, current retry=0 2023-07-11 18:17:08,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650] are moved back to Group_testMultiTableMove_539869992 2023-07-11 18:17:08,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_539869992 => default 2023-07-11 18:17:08,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_539869992 2023-07-11 18:17:08,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:17:08,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:08,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:08,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:08,158 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:08,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,163 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:08,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:08,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:08,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,174 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:08,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 511 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100628174, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:08,175 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:08,177 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:08,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,178 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:08,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,196 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=521 (was 522), OpenFileDescriptor=823 (was 826), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=603 (was 552) - SystemLoadAverage LEAK? 
-, ProcessCount=172 (was 172), AvailableMemoryMB=2126 (was 2220) 2023-07-11 18:17:08,196 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-11 18:17:08,213 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=521, OpenFileDescriptor=823, MaxFileDescriptor=60000, SystemLoadAverage=603, ProcessCount=172, AvailableMemoryMB=2125 2023-07-11 18:17:08,213 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-11 18:17:08,213 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-11 18:17:08,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:08,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:08,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:08,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:08,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,227 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:08,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:08,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): 
Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,232 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:08,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:08,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 539 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100628242, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:08,243 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:08,245 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:08,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,246 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:08,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-11 18:17:08,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup oldGroup 2023-07-11 18:17:08,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 18:17:08,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to default 2023-07-11 18:17:08,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-11 18:17:08,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-11 18:17:08,289 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-11 18:17:08,290 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,292 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-11 18:17:08,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,295 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-11 18:17:08,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,298 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:08,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,305 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,308 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45471] to rsgroup anotherRSGroup 2023-07-11 18:17:08,310 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-11 18:17:08,311 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,312 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:08,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 18:17:08,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45471,1689099407428] are moved back to default 2023-07-11 18:17:08,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-11 18:17:08,314 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,317 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-11 18:17:08,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-11 18:17:08,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,327 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-11 18:17:08,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 573 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:57346 deadline: 1689100628326, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-11 18:17:08,328 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-11 18:17:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 575 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:57346 deadline: 1689100628328, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-11 18:17:08,329 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-11 18:17:08,329 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 577 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:57346 deadline: 1689100628329, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-11 18:17:08,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-11 18:17:08,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,330 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 579 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:57346 deadline: 1689100628330, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-11 18:17:08,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,334 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:08,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45471] to rsgroup default 2023-07-11 18:17:08,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-11 18:17:08,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:08,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-11 18:17:08,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45471,1689099407428] are moved back to anotherRSGroup 2023-07-11 18:17:08,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-11 18:17:08,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,343 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-11 18:17:08,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-11 18:17:08,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-11 18:17:08,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:17:08,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,352 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-11 18:17:08,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-11 18:17:08,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to oldGroup 2023-07-11 18:17:08,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-11 18:17:08,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-11 18:17:08,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:17:08,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:08,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:08,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:08,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:08,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,372 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:08,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:08,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:08,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:08,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 615 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100628384, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:08,384 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:08,386 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:08,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,387 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:08,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,408 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=525 (was 521) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-20 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=823 (was 823), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=603 (was 603), ProcessCount=172 (was 172), AvailableMemoryMB=2125 (was 2125) 2023-07-11 18:17:08,408 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-11 18:17:08,427 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=525, OpenFileDescriptor=823, MaxFileDescriptor=60000, SystemLoadAverage=603, ProcessCount=172, AvailableMemoryMB=2124 2023-07-11 18:17:08,427 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=525 is superior to 500 2023-07-11 18:17:08,427 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-11 18:17:08,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:08,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:08,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:08,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:08,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:08,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,437 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:08,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:08,441 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:08,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:08,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,444 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,445 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:08,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:08,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:08,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 643 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100628455, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:08,456 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:08,457 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:08,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,458 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,458 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:08,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:08,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-11 18:17:08,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:08,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,465 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:08,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,470 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup oldgroup 2023-07-11 18:17:08,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:08,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 18:17:08,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to default 2023-07-11 18:17:08,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-11 18:17:08,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:08,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:08,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:08,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-11 18:17:08,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:08,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:08,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-11 18:17:08,486 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:08,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 117 2023-07-11 18:17:08,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 18:17:08,488 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:08,489 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:08,489 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:08,489 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:08,491 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:08,493 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:08,494 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 empty. 
2023-07-11 18:17:08,494 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:08,494 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-11 18:17:08,588 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 18:17:08,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 18:17:08,909 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:08,911 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 4c6a8d44fa4722459b1960c3bec455c9, NAME => 'testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:08,926 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:08,927 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 4c6a8d44fa4722459b1960c3bec455c9, disabling compactions & flushes 2023-07-11 18:17:08,927 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:08,927 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:08,927 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. after waiting 0 ms 2023-07-11 18:17:08,927 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:08,927 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 
2023-07-11 18:17:08,927 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:08,929 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:08,930 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099428930"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099428930"}]},"ts":"1689099428930"} 2023-07-11 18:17:08,931 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:08,932 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:08,932 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099428932"}]},"ts":"1689099428932"} 2023-07-11 18:17:08,933 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-11 18:17:08,936 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:08,936 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:08,936 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:08,936 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:08,936 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, ASSIGN}] 2023-07-11 18:17:08,938 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, ASSIGN 2023-07-11 18:17:08,938 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:09,089 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 18:17:09,090 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:09,090 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099429090"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099429090"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099429090"}]},"ts":"1689099429090"} 2023-07-11 18:17:09,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 18:17:09,092 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:09,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:09,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c6a8d44fa4722459b1960c3bec455c9, NAME => 'testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:09,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:09,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,248 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,250 DEBUG [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/tr 2023-07-11 18:17:09,250 DEBUG [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/tr 2023-07-11 18:17:09,250 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c6a8d44fa4722459b1960c3bec455c9 columnFamilyName tr 2023-07-11 18:17:09,251 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] regionserver.HStore(310): Store=4c6a8d44fa4722459b1960c3bec455c9/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:09,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,255 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:09,257 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c6a8d44fa4722459b1960c3bec455c9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9596313760, jitterRate=-0.10627363622188568}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:09,257 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:09,258 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9., pid=119, masterSystemTime=1689099429243 2023-07-11 18:17:09,259 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:09,259 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 
2023-07-11 18:17:09,260 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:09,260 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099429260"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099429260"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099429260"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099429260"}]},"ts":"1689099429260"} 2023-07-11 18:17:09,262 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-11 18:17:09,263 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,45471,1689099407428 in 169 msec 2023-07-11 18:17:09,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-11 18:17:09,264 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, ASSIGN in 327 msec 2023-07-11 18:17:09,265 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:09,265 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099429265"}]},"ts":"1689099429265"} 2023-07-11 18:17:09,266 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-11 18:17:09,268 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:09,269 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=testRename in 784 msec 2023-07-11 18:17:09,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-11 18:17:09,592 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 117 completed 2023-07-11 18:17:09,592 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-11 18:17:09,593 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:09,612 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-11 18:17:09,612 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:09,612 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
2023-07-11 18:17:09,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-11 18:17:09,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:09,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:09,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:09,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:09,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-11 18:17:09,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 4c6a8d44fa4722459b1960c3bec455c9 to RSGroup oldgroup 2023-07-11 18:17:09,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:09,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:09,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:09,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:09,620 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:09,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE 2023-07-11 18:17:09,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-11 18:17:09,621 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE 2023-07-11 18:17:09,622 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:09,622 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099429622"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099429622"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099429622"}]},"ts":"1689099429622"} 2023-07-11 18:17:09,624 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, 
ppid=120, state=RUNNABLE; CloseRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:09,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c6a8d44fa4722459b1960c3bec455c9, disabling compactions & flushes 2023-07-11 18:17:09,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:09,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:09,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. after waiting 0 ms 2023-07-11 18:17:09,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:09,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:09,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:09,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:09,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4c6a8d44fa4722459b1960c3bec455c9 move to jenkins-hbase4.apache.org,37889,1689099411612 record at close sequenceid=2 2023-07-11 18:17:09,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:09,785 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=CLOSED 2023-07-11 18:17:09,785 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099429785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099429785"}]},"ts":"1689099429785"} 2023-07-11 18:17:09,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-11 18:17:09,788 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,45471,1689099407428 in 163 msec 2023-07-11 18:17:09,789 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37889,1689099411612; 
forceNewPlan=false, retain=false 2023-07-11 18:17:09,939 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-11 18:17:09,939 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:09,940 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099429939"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099429939"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099429939"}]},"ts":"1689099429939"} 2023-07-11 18:17:09,941 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:17:10,097 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:10,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c6a8d44fa4722459b1960c3bec455c9, NAME => 'testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:10,097 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:10,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,098 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,099 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,100 DEBUG [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/tr 2023-07-11 18:17:10,100 DEBUG [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/tr 2023-07-11 18:17:10,100 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c6a8d44fa4722459b1960c3bec455c9 columnFamilyName tr 2023-07-11 18:17:10,101 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] regionserver.HStore(310): Store=4c6a8d44fa4722459b1960c3bec455c9/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:10,102 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,103 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,105 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:10,106 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c6a8d44fa4722459b1960c3bec455c9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10668719040, jitterRate=-0.006398111581802368}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:10,106 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:10,107 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9., pid=122, masterSystemTime=1689099430093 2023-07-11 18:17:10,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:10,109 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 
2023-07-11 18:17:10,109 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:10,109 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099430109"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099430109"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099430109"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099430109"}]},"ts":"1689099430109"} 2023-07-11 18:17:10,112 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-11 18:17:10,112 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,37889,1689099411612 in 169 msec 2023-07-11 18:17:10,113 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE in 492 msec 2023-07-11 18:17:10,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-11 18:17:10,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-11 18:17:10,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:10,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:10,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:10,628 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:10,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-11 18:17:10,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:10,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-11 18:17:10,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:10,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-11 18:17:10,631 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:10,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:10,632 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:10,633 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-11 18:17:10,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:10,636 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:10,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:10,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:10,639 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:10,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:10,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:10,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:10,658 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45471] to rsgroup normal 2023-07-11 18:17:10,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:10,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:10,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:10,662 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:10,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:10,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 18:17:10,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45471,1689099407428] are moved back to default 2023-07-11 18:17:10,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-11 18:17:10,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:10,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:10,669 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:10,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-11 18:17:10,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:10,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:10,674 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-11 18:17:10,676 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:10,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 123 2023-07-11 18:17:10,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 18:17:10,678 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:10,679 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:10,679 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:10,679 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-11 18:17:10,680 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:10,682 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:10,683 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:10,684 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 empty. 2023-07-11 18:17:10,685 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:10,685 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-11 18:17:10,699 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:10,700 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => dcd96e07bbafe36befea35835c2da151, NAME => 'unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:10,718 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:10,718 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing dcd96e07bbafe36befea35835c2da151, disabling compactions & flushes 2023-07-11 18:17:10,718 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:10,718 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:10,718 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. after waiting 0 ms 2023-07-11 18:17:10,719 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:10,719 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 
2023-07-11 18:17:10,719 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:10,721 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:10,722 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099430722"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099430722"}]},"ts":"1689099430722"} 2023-07-11 18:17:10,723 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:10,724 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:10,724 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099430724"}]},"ts":"1689099430724"} 2023-07-11 18:17:10,725 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-11 18:17:10,728 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, ASSIGN}] 2023-07-11 18:17:10,730 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, ASSIGN 2023-07-11 18:17:10,731 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=124, ppid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:10,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 18:17:10,883 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:10,883 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099430883"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099430883"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099430883"}]},"ts":"1689099430883"} 2023-07-11 18:17:10,884 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=124, state=RUNNABLE; OpenRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:10,906 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried 
hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 18:17:10,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 18:17:11,048 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,048 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dcd96e07bbafe36befea35835c2da151, NAME => 'unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:11,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:11,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,049 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,054 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,057 DEBUG [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/ut 2023-07-11 18:17:11,057 DEBUG [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/ut 2023-07-11 18:17:11,057 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dcd96e07bbafe36befea35835c2da151 columnFamilyName ut 2023-07-11 18:17:11,058 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] regionserver.HStore(310): Store=dcd96e07bbafe36befea35835c2da151/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:11,059 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,059 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,068 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:11,069 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dcd96e07bbafe36befea35835c2da151; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11779966400, jitterRate=0.09709486365318298}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:11,069 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:11,071 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151., pid=125, masterSystemTime=1689099431036 2023-07-11 18:17:11,074 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=124 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:11,074 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099431074"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099431074"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099431074"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099431074"}]},"ts":"1689099431074"} 2023-07-11 18:17:11,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,075 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 
2023-07-11 18:17:11,079 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=124 2023-07-11 18:17:11,079 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=124, state=SUCCESS; OpenRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45821,1689099407865 in 192 msec 2023-07-11 18:17:11,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-11 18:17:11,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, ASSIGN in 351 msec 2023-07-11 18:17:11,084 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:11,084 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099431084"}]},"ts":"1689099431084"} 2023-07-11 18:17:11,086 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-11 18:17:11,089 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=123, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:11,091 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; CreateTableProcedure table=unmovedTable in 416 msec 2023-07-11 18:17:11,280 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=123 2023-07-11 18:17:11,280 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 123 completed 2023-07-11 18:17:11,280 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-11 18:17:11,281 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:11,284 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-11 18:17:11,284 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:11,284 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-11 18:17:11,286 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-11 18:17:11,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-11 18:17:11,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:11,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:11,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:11,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:11,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-11 18:17:11,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region dcd96e07bbafe36befea35835c2da151 to RSGroup normal 2023-07-11 18:17:11,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE 2023-07-11 18:17:11,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-11 18:17:11,294 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE 2023-07-11 18:17:11,295 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:11,295 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099431294"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099431294"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099431294"}]},"ts":"1689099431294"} 2023-07-11 18:17:11,296 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:11,449 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dcd96e07bbafe36befea35835c2da151, disabling compactions & flushes 2023-07-11 18:17:11,450 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 
2023-07-11 18:17:11,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,450 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. after waiting 0 ms 2023-07-11 18:17:11,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,456 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:11,456 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,457 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:11,457 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dcd96e07bbafe36befea35835c2da151 move to jenkins-hbase4.apache.org,45471,1689099407428 record at close sequenceid=2 2023-07-11 18:17:11,459 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,459 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=CLOSED 2023-07-11 18:17:11,460 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099431459"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099431459"}]},"ts":"1689099431459"} 2023-07-11 18:17:11,463 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-11 18:17:11,464 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45821,1689099407865 in 165 msec 2023-07-11 18:17:11,464 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:11,615 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:11,615 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099431615"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099431615"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099431615"}]},"ts":"1689099431615"} 2023-07-11 18:17:11,617 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:11,773 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dcd96e07bbafe36befea35835c2da151, NAME => 'unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:11,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:11,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,773 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,776 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,777 DEBUG [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/ut 2023-07-11 18:17:11,777 DEBUG [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/ut 2023-07-11 18:17:11,778 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
dcd96e07bbafe36befea35835c2da151 columnFamilyName ut 2023-07-11 18:17:11,778 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] regionserver.HStore(310): Store=dcd96e07bbafe36befea35835c2da151/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:11,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,781 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:11,785 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dcd96e07bbafe36befea35835c2da151; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11732168160, jitterRate=0.09264330565929413}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:11,785 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:11,786 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151., pid=128, masterSystemTime=1689099431769 2023-07-11 18:17:11,788 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:11,788 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 
2023-07-11 18:17:11,789 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:11,789 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099431789"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099431789"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099431789"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099431789"}]},"ts":"1689099431789"} 2023-07-11 18:17:11,794 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-11 18:17:11,794 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45471,1689099407428 in 175 msec 2023-07-11 18:17:11,795 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE in 501 msec 2023-07-11 18:17:12,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-11 18:17:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-11 18:17:12,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:12,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:12,298 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:12,300 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:12,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-11 18:17:12,301 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:12,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-11 18:17:12,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:12,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-11 18:17:12,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:12,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-11 18:17:12,305 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:12,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:12,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:12,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:12,308 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-11 18:17:12,309 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-11 18:17:12,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:12,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:12,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-11 18:17:12,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:12,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-11 18:17:12,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:12,316 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-11 18:17:12,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:12,320 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:12,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:12,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-11 18:17:12,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:12,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:12,324 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:12,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:12,325 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:12,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-11 18:17:12,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region dcd96e07bbafe36befea35835c2da151 to RSGroup default 2023-07-11 18:17:12,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE 2023-07-11 18:17:12,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 18:17:12,333 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE 2023-07-11 18:17:12,334 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:12,334 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099432334"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099432334"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099432334"}]},"ts":"1689099432334"} 2023-07-11 18:17:12,336 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:12,489 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dcd96e07bbafe36befea35835c2da151, disabling compactions & flushes 2023-07-11 18:17:12,490 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:12,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:12,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. after waiting 0 ms 2023-07-11 18:17:12,490 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:12,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:17:12,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:12,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:12,496 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dcd96e07bbafe36befea35835c2da151 move to jenkins-hbase4.apache.org,45821,1689099407865 record at close sequenceid=5 2023-07-11 18:17:12,500 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,500 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=CLOSED 2023-07-11 18:17:12,500 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099432500"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099432500"}]},"ts":"1689099432500"} 2023-07-11 18:17:12,503 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-11 18:17:12,503 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45471,1689099407428 in 165 msec 2023-07-11 18:17:12,504 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:12,655 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:12,655 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099432655"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099432655"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099432655"}]},"ts":"1689099432655"} 2023-07-11 18:17:12,658 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:12,819 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:12,819 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dcd96e07bbafe36befea35835c2da151, NAME => 'unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:12,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:12,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,820 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,821 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,823 DEBUG [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/ut 2023-07-11 18:17:12,823 DEBUG [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/ut 2023-07-11 18:17:12,823 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dcd96e07bbafe36befea35835c2da151 columnFamilyName ut 2023-07-11 18:17:12,824 INFO [StoreOpener-dcd96e07bbafe36befea35835c2da151-1] regionserver.HStore(310): Store=dcd96e07bbafe36befea35835c2da151/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:12,825 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,826 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,830 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:12,831 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dcd96e07bbafe36befea35835c2da151; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11063398560, jitterRate=0.030359283089637756}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:12,831 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:12,832 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151., pid=131, masterSystemTime=1689099432814 2023-07-11 18:17:12,834 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:12,834 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 
2023-07-11 18:17:12,834 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=dcd96e07bbafe36befea35835c2da151, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:12,834 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689099432834"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099432834"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099432834"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099432834"}]},"ts":"1689099432834"} 2023-07-11 18:17:12,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-11 18:17:12,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure dcd96e07bbafe36befea35835c2da151, server=jenkins-hbase4.apache.org,45821,1689099407865 in 178 msec 2023-07-11 18:17:12,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=dcd96e07bbafe36befea35835c2da151, REOPEN/MOVE in 506 msec 2023-07-11 18:17:13,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-11 18:17:13,333 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-11 18:17:13,334 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:13,335 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45471] to rsgroup default 2023-07-11 18:17:13,337 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-11 18:17:13,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:13,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:13,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:13,339 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:13,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-11 18:17:13,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45471,1689099407428] are moved back to normal 2023-07-11 18:17:13,341 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-11 18:17:13,341 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:13,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-11 18:17:13,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:13,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:13,346 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:13,347 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-11 18:17:13,348 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:13,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:13,349 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:13,349 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:13,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:13,350 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:13,351 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:13,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:13,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:13,355 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:17:13,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:13,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-11 18:17:13,360 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:13,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:13,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:13,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-11 18:17:13,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(345): Moving region 4c6a8d44fa4722459b1960c3bec455c9 to RSGroup default 2023-07-11 18:17:13,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE 2023-07-11 18:17:13,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-11 18:17:13,363 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE 2023-07-11 18:17:13,363 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:13,363 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099433363"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099433363"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099433363"}]},"ts":"1689099433363"} 2023-07-11 18:17:13,364 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE; CloseRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,37889,1689099411612}] 2023-07-11 18:17:13,517 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c6a8d44fa4722459b1960c3bec455c9, disabling compactions & flushes 2023-07-11 18:17:13,519 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:13,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:13,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 
after waiting 0 ms 2023-07-11 18:17:13,519 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:13,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-11 18:17:13,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:13,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:13,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 4c6a8d44fa4722459b1960c3bec455c9 move to jenkins-hbase4.apache.org,45471,1689099407428 record at close sequenceid=5 2023-07-11 18:17:13,527 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,528 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=CLOSED 2023-07-11 18:17:13,528 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099433528"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099433528"}]},"ts":"1689099433528"} 2023-07-11 18:17:13,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=132 2023-07-11 18:17:13,531 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; CloseRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,37889,1689099411612 in 165 msec 2023-07-11 18:17:13,531 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:13,682 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
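[editor's note, illustrative sketch] The stretch of log above is the per-test cleanup pass in TestRSGroupsBase: the testRename table and the servers that hosted it are moved back to the default group before the group is dropped. A minimal sketch of the client-side calls that drive these master RPCs follows; it assumes an open Connection to the mini-cluster (conn is a placeholder name) and uses only RSGroupAdminClient methods that also appear in the stack traces further down in this log.

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Illustrative only: roughly the group cleanup this log records, inside a test class.
    static void restoreDefaultGroup(Connection conn) throws Exception {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Move the test table back to the default group (the pid=132 REOPEN/MOVE above).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("testRename")), "default");
      // Move the servers that were in "newgroup" back to default, then drop the group.
      rsGroupAdmin.moveServers(
          new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 37889),
              Address.fromParts("jenkins-hbase4.apache.org", 37773))), "default");
      rsGroupAdmin.removeRSGroup("newgroup");
    }
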
2023-07-11 18:17:13,682 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:13,682 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099433682"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099433682"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099433682"}]},"ts":"1689099433682"} 2023-07-11 18:17:13,684 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=134, ppid=132, state=RUNNABLE; OpenRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:13,839 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-11 18:17:13,840 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:13,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4c6a8d44fa4722459b1960c3bec455c9, NAME => 'testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:13,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:13,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,842 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,843 DEBUG [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/tr 2023-07-11 18:17:13,843 DEBUG [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/tr 2023-07-11 18:17:13,843 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; 
throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4c6a8d44fa4722459b1960c3bec455c9 columnFamilyName tr 2023-07-11 18:17:13,844 INFO [StoreOpener-4c6a8d44fa4722459b1960c3bec455c9-1] regionserver.HStore(310): Store=4c6a8d44fa4722459b1960c3bec455c9/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:13,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,846 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:13,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4c6a8d44fa4722459b1960c3bec455c9; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10615713600, jitterRate=-0.011334627866744995}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:13,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:13,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9., pid=134, masterSystemTime=1689099433836 2023-07-11 18:17:13,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:13,853 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 
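[editor's note, illustrative sketch] Once the OpenRegionProcedure above completes, the testRename region is serving from jenkins-hbase4.apache.org,45471, a server that is back in the default group. One hedged way a test could verify that outcome is sketched below; conn and rsGroupAdmin are the same hypothetical handles as in the earlier sketch, and assertTrue comes from org.junit.Assert.

    import static org.junit.Assert.assertTrue;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Illustrative check: every region of testRename should now sit on a server of the default group.
    RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo("default");
    for (HRegionLocation loc :
        conn.getRegionLocator(TableName.valueOf("testRename")).getAllRegionLocations()) {
      assertTrue(defaultGroup.getServers().contains(loc.getServerName().getAddress()));
    }
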
2023-07-11 18:17:13,853 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=4c6a8d44fa4722459b1960c3bec455c9, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:13,853 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689099433853"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099433853"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099433853"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099433853"}]},"ts":"1689099433853"} 2023-07-11 18:17:13,857 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-11 18:17:13,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; OpenRegionProcedure 4c6a8d44fa4722459b1960c3bec455c9, server=jenkins-hbase4.apache.org,45471,1689099407428 in 171 msec 2023-07-11 18:17:13,859 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=4c6a8d44fa4722459b1960c3bec455c9, REOPEN/MOVE in 496 msec 2023-07-11 18:17:14,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure.ProcedureSyncWait(216): waitFor pid=132 2023-07-11 18:17:14,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-11 18:17:14,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:14,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:17:14,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-11 18:17:14,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:14,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-11 18:17:14,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to newgroup 2023-07-11 18:17:14,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-11 18:17:14,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:14,371 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-11 18:17:14,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:14,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:14,382 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:14,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:14,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:14,389 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:14,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:14,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 765 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100634394, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:14,394 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:14,396 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:14,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,397 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:14,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:14,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,417 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=517 (was 525), OpenFileDescriptor=800 (was 823), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=563 (was 603), ProcessCount=170 (was 172), AvailableMemoryMB=4293 (was 2124) - AvailableMemoryMB LEAK? 
- 2023-07-11 18:17:14,417 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-11 18:17:14,434 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=517, OpenFileDescriptor=800, MaxFileDescriptor=60000, SystemLoadAverage=563, ProcessCount=170, AvailableMemoryMB=4293 2023-07-11 18:17:14,434 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=517 is superior to 500 2023-07-11 18:17:14,434 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-11 18:17:14,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:14,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:14,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:14,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:14,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:14,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:14,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:14,445 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:14,447 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:14,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:14,449 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,452 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:14,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:14,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:14,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 793 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100634457, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:14,458 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:14,459 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:14,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,460 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:14,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:14,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-11 18:17:14,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:14,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-11 18:17:14,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-11 18:17:14,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-11 18:17:14,468 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-11 18:17:14,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,469 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 805 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:57346 deadline: 1689100634469, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-11 18:17:14,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-11 18:17:14,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 808 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:57346 deadline: 1689100634471, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-11 18:17:14,474 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-11 18:17:14,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-11 18:17:14,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-11 18:17:14,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 812 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:57346 deadline: 1689100634481, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-11 18:17:14,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,488 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:14,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:14,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:14,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:14,490 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:14,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:14,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:14,495 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:14,497 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:14,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:14,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:14,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:14,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:14,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 836 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100634508, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:14,512 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:14,513 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:14,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,515 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:14,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:14,515 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,538 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=521 (was 517) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2dc08bfd-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=800 (was 800), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=563 (was 563), ProcessCount=170 (was 170), AvailableMemoryMB=4293 (was 4293) 2023-07-11 18:17:14,538 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-11 18:17:14,557 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=521, OpenFileDescriptor=800, MaxFileDescriptor=60000, SystemLoadAverage=563, ProcessCount=170, AvailableMemoryMB=4292 2023-07-11 18:17:14,557 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=521 is superior to 500 2023-07-11 18:17:14,557 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-11 18:17:14,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:14,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:14,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:14,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:14,563 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:14,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:14,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:14,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:14,571 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:14,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:14,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:14,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:14,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,586 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:14,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:14,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 864 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100634586, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:14,587 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:14,588 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:14,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,589 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,589 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:14,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:14,590 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:14,591 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,592 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,594 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 
18:17:14,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:14,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:14,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,604 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,607 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:14,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-11 18:17:14,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to default 2023-07-11 18:17:14,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:14,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:14,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:14,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,617 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:14,618 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:14,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:14,621 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:14,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 135 2023-07-11 18:17:14,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 18:17:14,623 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:14,623 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1161086009 2023-07-11 18:17:14,624 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:14,624 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:14,626 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:14,630 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:14,630 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:14,630 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:14,630 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:14,630 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:14,631 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b empty. 2023-07-11 18:17:14,631 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 empty. 2023-07-11 18:17:14,631 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 empty. 2023-07-11 18:17:14,631 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 empty. 2023-07-11 18:17:14,632 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 empty. 2023-07-11 18:17:14,632 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:14,632 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:14,632 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:14,632 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:14,632 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:14,632 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-11 18:17:14,648 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:14,651 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 5fe4277e260d383df66e709c1a38f479, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:14,651 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0f8b033235a4159df704c1fd3ca34f19, NAME => 'Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:14,651 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => f6857f11cdd5f55c00ab2ea8e6b26b40, NAME => 'Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:14,672 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:14,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 5fe4277e260d383df66e709c1a38f479, disabling compactions & flushes 2023-07-11 18:17:14,673 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:14,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:14,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. after waiting 0 ms 2023-07-11 18:17:14,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 
2023-07-11 18:17:14,673 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:14,673 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 5fe4277e260d383df66e709c1a38f479: 2023-07-11 18:17:14,673 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 2c4b4ff55648d797684f4a45b99d77d7, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:14,675 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:14,675 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 0f8b033235a4159df704c1fd3ca34f19, disabling compactions & flushes 2023-07-11 18:17:14,675 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:14,675 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:14,675 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. after waiting 0 ms 2023-07-11 18:17:14,675 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:14,675 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 
2023-07-11 18:17:14,675 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 0f8b033235a4159df704c1fd3ca34f19: 2023-07-11 18:17:14,676 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => da064b5e4470067992b17a8791e8344b, NAME => 'Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp 2023-07-11 18:17:14,679 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:14,679 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing f6857f11cdd5f55c00ab2ea8e6b26b40, disabling compactions & flushes 2023-07-11 18:17:14,679 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:14,679 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:14,679 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. after waiting 0 ms 2023-07-11 18:17:14,679 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:14,679 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:14,679 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for f6857f11cdd5f55c00ab2ea8e6b26b40: 2023-07-11 18:17:14,690 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:14,690 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 2c4b4ff55648d797684f4a45b99d77d7, disabling compactions & flushes 2023-07-11 18:17:14,690 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 
2023-07-11 18:17:14,690 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:14,690 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. after waiting 0 ms 2023-07-11 18:17:14,690 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:14,690 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:14,690 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 2c4b4ff55648d797684f4a45b99d77d7: 2023-07-11 18:17:14,691 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:14,692 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing da064b5e4470067992b17a8791e8344b, disabling compactions & flushes 2023-07-11 18:17:14,692 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:14,692 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:14,692 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. after waiting 0 ms 2023-07-11 18:17:14,692 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:14,692 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 
2023-07-11 18:17:14,692 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for da064b5e4470067992b17a8791e8344b: 2023-07-11 18:17:14,694 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:14,695 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099434695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099434695"}]},"ts":"1689099434695"} 2023-07-11 18:17:14,695 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099434695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099434695"}]},"ts":"1689099434695"} 2023-07-11 18:17:14,695 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099434695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099434695"}]},"ts":"1689099434695"} 2023-07-11 18:17:14,695 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099434695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099434695"}]},"ts":"1689099434695"} 2023-07-11 18:17:14,695 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099434695"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099434695"}]},"ts":"1689099434695"} 2023-07-11 18:17:14,697 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
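
[Editor's note, not part of the log] The MetaTableAccessor puts above write one row per new region (qualifiers regioninfo and state in the info family) into hbase:meta. A minimal sketch of how those rows could be inspected from a client, assuming only a plain Connection to the cluster; this is illustrative and not code from TestRSGroupsAdmin1:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRegionRows {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows in hbase:meta are keyed "<table>,<startkey>,<ts>.<encoded>.",
          // so a row-prefix scan on the table name returns the five rows added above.
          Scan scan = new Scan();
          scan.setRowPrefixFilter(Bytes.toBytes("Group_testDisabledTableMove,"));
          scan.addFamily(Bytes.toBytes("info")); // regioninfo, state, sn, server, ...
          try (ResultScanner rs = meta.getScanner(scan)) {
            for (Result r : rs) {
              System.out.println(Bytes.toStringBinary(r.getRow()));
            }
          }
        }
      }
    }
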
2023-07-11 18:17:14,698 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:14,698 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099434698"}]},"ts":"1689099434698"} 2023-07-11 18:17:14,699 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-11 18:17:14,703 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:14,703 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:14,703 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:14,703 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:14,703 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, ASSIGN}, {pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, ASSIGN}, {pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, ASSIGN}, {pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, ASSIGN}, {pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, ASSIGN}] 2023-07-11 18:17:14,706 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, ASSIGN 2023-07-11 18:17:14,706 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, ASSIGN 2023-07-11 18:17:14,706 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, ASSIGN 2023-07-11 18:17:14,706 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, ASSIGN 2023-07-11 18:17:14,706 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:14,707 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:14,707 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=138, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:14,707 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=139, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45821,1689099407865; forceNewPlan=false, retain=false 2023-07-11 18:17:14,708 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, ASSIGN 2023-07-11 18:17:14,711 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=140, ppid=135, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1689099407428; forceNewPlan=false, retain=false 2023-07-11 18:17:14,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 18:17:14,857 INFO [jenkins-hbase4:45397] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-11 18:17:14,860 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=da064b5e4470067992b17a8791e8344b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:14,860 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=f6857f11cdd5f55c00ab2ea8e6b26b40, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:14,860 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=5fe4277e260d383df66e709c1a38f479, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:14,861 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099434860"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099434860"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099434860"}]},"ts":"1689099434860"} 2023-07-11 18:17:14,860 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=0f8b033235a4159df704c1fd3ca34f19, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:14,860 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=2c4b4ff55648d797684f4a45b99d77d7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:14,861 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099434860"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099434860"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099434860"}]},"ts":"1689099434860"} 2023-07-11 18:17:14,861 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099434860"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099434860"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099434860"}]},"ts":"1689099434860"} 2023-07-11 18:17:14,861 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099434860"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099434860"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099434860"}]},"ts":"1689099434860"} 2023-07-11 18:17:14,861 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099434860"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099434860"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099434860"}]},"ts":"1689099434860"} 2023-07-11 18:17:14,862 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=136, state=RUNNABLE; OpenRegionProcedure f6857f11cdd5f55c00ab2ea8e6b26b40, 
server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:14,863 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=137, state=RUNNABLE; OpenRegionProcedure 0f8b033235a4159df704c1fd3ca34f19, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:14,863 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=143, ppid=139, state=RUNNABLE; OpenRegionProcedure 2c4b4ff55648d797684f4a45b99d77d7, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:14,864 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=138, state=RUNNABLE; OpenRegionProcedure 5fe4277e260d383df66e709c1a38f479, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:14,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=145, ppid=140, state=RUNNABLE; OpenRegionProcedure da064b5e4470067992b17a8791e8344b, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:14,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 18:17:15,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:15,019 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0f8b033235a4159df704c1fd3ca34f19, NAME => 'Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-11 18:17:15,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f6857f11cdd5f55c00ab2ea8e6b26b40, NAME => 'Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-11 18:17:15,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,019 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:15,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:15,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,020 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,021 INFO [StoreOpener-f6857f11cdd5f55c00ab2ea8e6b26b40-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,021 INFO [StoreOpener-0f8b033235a4159df704c1fd3ca34f19-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,023 DEBUG [StoreOpener-f6857f11cdd5f55c00ab2ea8e6b26b40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/f 2023-07-11 18:17:15,023 DEBUG [StoreOpener-f6857f11cdd5f55c00ab2ea8e6b26b40-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/f 2023-07-11 18:17:15,023 DEBUG [StoreOpener-0f8b033235a4159df704c1fd3ca34f19-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/f 2023-07-11 18:17:15,023 DEBUG [StoreOpener-0f8b033235a4159df704c1fd3ca34f19-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/f 2023-07-11 18:17:15,023 INFO [StoreOpener-f6857f11cdd5f55c00ab2ea8e6b26b40-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f6857f11cdd5f55c00ab2ea8e6b26b40 columnFamilyName f 2023-07-11 18:17:15,023 INFO [StoreOpener-0f8b033235a4159df704c1fd3ca34f19-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 
0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0f8b033235a4159df704c1fd3ca34f19 columnFamilyName f 2023-07-11 18:17:15,024 INFO [StoreOpener-f6857f11cdd5f55c00ab2ea8e6b26b40-1] regionserver.HStore(310): Store=f6857f11cdd5f55c00ab2ea8e6b26b40/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:15,024 INFO [StoreOpener-0f8b033235a4159df704c1fd3ca34f19-1] regionserver.HStore(310): Store=0f8b033235a4159df704c1fd3ca34f19/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:15,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,028 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,029 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:15,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:15,033 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f6857f11cdd5f55c00ab2ea8e6b26b40; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11369719360, jitterRate=0.058887630701065063}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 
18:17:15,033 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f6857f11cdd5f55c00ab2ea8e6b26b40: 2023-07-11 18:17:15,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0f8b033235a4159df704c1fd3ca34f19; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10259456160, jitterRate=-0.04451368749141693}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:15,034 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0f8b033235a4159df704c1fd3ca34f19: 2023-07-11 18:17:15,034 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40., pid=141, masterSystemTime=1689099435015 2023-07-11 18:17:15,035 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19., pid=142, masterSystemTime=1689099435016 2023-07-11 18:17:15,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,036 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 
2023-07-11 18:17:15,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5fe4277e260d383df66e709c1a38f479, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-11 18:17:15,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,036 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:15,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,037 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,037 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=f6857f11cdd5f55c00ab2ea8e6b26b40, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,037 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099435037"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099435037"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099435037"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099435037"}]},"ts":"1689099435037"} 2023-07-11 18:17:15,039 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=0f8b033235a4159df704c1fd3ca34f19, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:15,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:15,039 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:15,039 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435039"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099435039"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099435039"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099435039"}]},"ts":"1689099435039"} 2023-07-11 18:17:15,039 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 
2023-07-11 18:17:15,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2c4b4ff55648d797684f4a45b99d77d7, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-11 18:17:15,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,039 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:15,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,040 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=136 2023-07-11 18:17:15,046 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=136, state=SUCCESS; OpenRegionProcedure f6857f11cdd5f55c00ab2ea8e6b26b40, server=jenkins-hbase4.apache.org,45471,1689099407428 in 177 msec 2023-07-11 18:17:15,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=137 2023-07-11 18:17:15,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=137, state=SUCCESS; OpenRegionProcedure 0f8b033235a4159df704c1fd3ca34f19, server=jenkins-hbase4.apache.org,45821,1689099407865 in 177 msec 2023-07-11 18:17:15,048 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, ASSIGN in 343 msec 2023-07-11 18:17:15,048 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, ASSIGN in 344 msec 2023-07-11 18:17:15,051 INFO [StoreOpener-2c4b4ff55648d797684f4a45b99d77d7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,051 INFO [StoreOpener-5fe4277e260d383df66e709c1a38f479-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,052 DEBUG [StoreOpener-2c4b4ff55648d797684f4a45b99d77d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/f 2023-07-11 18:17:15,052 DEBUG [StoreOpener-5fe4277e260d383df66e709c1a38f479-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/f 2023-07-11 18:17:15,052 DEBUG [StoreOpener-2c4b4ff55648d797684f4a45b99d77d7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/f 2023-07-11 18:17:15,052 DEBUG [StoreOpener-5fe4277e260d383df66e709c1a38f479-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/f 2023-07-11 18:17:15,053 INFO [StoreOpener-2c4b4ff55648d797684f4a45b99d77d7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2c4b4ff55648d797684f4a45b99d77d7 columnFamilyName f 2023-07-11 18:17:15,053 INFO [StoreOpener-5fe4277e260d383df66e709c1a38f479-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5fe4277e260d383df66e709c1a38f479 columnFamilyName f 2023-07-11 18:17:15,053 INFO [StoreOpener-2c4b4ff55648d797684f4a45b99d77d7-1] regionserver.HStore(310): Store=2c4b4ff55648d797684f4a45b99d77d7/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:15,054 INFO [StoreOpener-5fe4277e260d383df66e709c1a38f479-1] regionserver.HStore(310): Store=5fe4277e260d383df66e709c1a38f479/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:15,054 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 
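
[Editor's note, not part of the log] The CompactionConfiguration lines above echo the stock branch-2.4 defaults. A hedged sketch of how those printed values map onto the standard configuration keys, assuming nothing in the test overrides them; values are copied from the log output itself:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionDefaults {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact:3
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact:10
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio 1.200000
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio 5.000000
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period (7 days in ms)
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter 0.500000
        System.out.println("compaction ratio = "
            + conf.getFloat("hbase.hstore.compaction.ratio", 0f));
      }
    }
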
2023-07-11 18:17:15,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,055 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,058 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:15,060 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:15,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5fe4277e260d383df66e709c1a38f479; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10265446240, jitterRate=-0.04395581781864166}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:15,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5fe4277e260d383df66e709c1a38f479: 2023-07-11 18:17:15,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2c4b4ff55648d797684f4a45b99d77d7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11637274720, jitterRate=0.08380566537380219}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:15,061 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2c4b4ff55648d797684f4a45b99d77d7: 2023-07-11 18:17:15,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479., pid=144, masterSystemTime=1689099435015 2023-07-11 18:17:15,061 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7., pid=143, masterSystemTime=1689099435016 2023-07-11 18:17:15,063 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 
2023-07-11 18:17:15,063 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:15,063 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=139 updating hbase:meta row=2c4b4ff55648d797684f4a45b99d77d7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:15,063 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435063"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099435063"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099435063"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099435063"}]},"ts":"1689099435063"} 2023-07-11 18:17:15,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:15,064 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:15,064 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:15,064 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => da064b5e4470067992b17a8791e8344b, NAME => 'Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-11 18:17:15,065 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=138 updating hbase:meta row=5fe4277e260d383df66e709c1a38f479, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,065 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435065"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099435065"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099435065"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099435065"}]},"ts":"1689099435065"} 2023-07-11 18:17:15,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:15,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,065 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking 
classloading for da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,066 INFO [StoreOpener-da064b5e4470067992b17a8791e8344b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,068 DEBUG [StoreOpener-da064b5e4470067992b17a8791e8344b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/f 2023-07-11 18:17:15,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=143, resume processing ppid=139 2023-07-11 18:17:15,068 DEBUG [StoreOpener-da064b5e4470067992b17a8791e8344b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/f 2023-07-11 18:17:15,068 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=139, state=SUCCESS; OpenRegionProcedure 2c4b4ff55648d797684f4a45b99d77d7, server=jenkins-hbase4.apache.org,45821,1689099407865 in 202 msec 2023-07-11 18:17:15,068 INFO [StoreOpener-da064b5e4470067992b17a8791e8344b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region da064b5e4470067992b17a8791e8344b columnFamilyName f 2023-07-11 18:17:15,069 INFO [StoreOpener-da064b5e4470067992b17a8791e8344b-1] regionserver.HStore(310): Store=da064b5e4470067992b17a8791e8344b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:15,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, ASSIGN in 365 msec 2023-07-11 18:17:15,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,070 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,073 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=138 2023-07-11 18:17:15,073 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id 
for da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,073 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=138, state=SUCCESS; OpenRegionProcedure 5fe4277e260d383df66e709c1a38f479, server=jenkins-hbase4.apache.org,45471,1689099407428 in 207 msec 2023-07-11 18:17:15,074 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, ASSIGN in 370 msec 2023-07-11 18:17:15,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:15,076 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened da064b5e4470067992b17a8791e8344b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10960707040, jitterRate=0.020795390009880066}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:15,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for da064b5e4470067992b17a8791e8344b: 2023-07-11 18:17:15,077 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b., pid=145, masterSystemTime=1689099435015 2023-07-11 18:17:15,078 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:15,078 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 
2023-07-11 18:17:15,078 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=140 updating hbase:meta row=da064b5e4470067992b17a8791e8344b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,078 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099435078"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099435078"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099435078"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099435078"}]},"ts":"1689099435078"} 2023-07-11 18:17:15,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=145, resume processing ppid=140 2023-07-11 18:17:15,081 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; OpenRegionProcedure da064b5e4470067992b17a8791e8344b, server=jenkins-hbase4.apache.org,45471,1689099407428 in 213 msec 2023-07-11 18:17:15,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-11 18:17:15,082 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, ASSIGN in 378 msec 2023-07-11 18:17:15,083 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:15,083 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099435083"}]},"ts":"1689099435083"} 2023-07-11 18:17:15,084 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-11 18:17:15,086 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=135, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:15,088 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=135, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 468 msec 2023-07-11 18:17:15,225 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=135 2023-07-11 18:17:15,225 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 135 completed 2023-07-11 18:17:15,225 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-11 18:17:15,226 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:15,229 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
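
[Editor's note, not part of the log] At this point the CreateTableProcedure (pid=135) has completed and the client immediately starts disabling the table. A minimal sketch of the Admin calls that would produce this create-then-disable sequence, assuming an open Connection to the mini cluster; the table and family names come from the log, while the split keys are simplified stand-ins for the five-region split the test actually uses:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateThenDisable {
      static void run(Connection conn) throws Exception {
        TableName tn = TableName.valueOf("Group_testDisabledTableMove");
        TableDescriptor td = TableDescriptorBuilder.newBuilder(tn)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f")) // family 'f' as in the log
            .build();
        // Four split keys -> five regions, matching "Added 5 regions to meta." above.
        byte[][] splits = {
            Bytes.toBytes("aaaaa"), Bytes.toBytes("i"),
            Bytes.toBytes("r"), Bytes.toBytes("zzzzz")
        };
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(td, splits);   // drives CreateTableProcedure (pid=135)
          admin.disableTable(tn);          // drives DisableTableProcedure (pid=146)
        }
      }
    }
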
2023-07-11 18:17:15,229 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:15,230 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-11 18:17:15,230 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:15,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-11 18:17:15,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:15,239 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-11 18:17:15,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-11 18:17:15,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=146, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-11 18:17:15,244 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099435244"}]},"ts":"1689099435244"} 2023-07-11 18:17:15,246 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-11 18:17:15,247 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-11 18:17:15,248 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, UNASSIGN}, {pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, UNASSIGN}, {pid=149, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, UNASSIGN}, {pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, UNASSIGN}, {pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, UNASSIGN}] 2023-07-11 18:17:15,250 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=150, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, UNASSIGN 2023-07-11 18:17:15,250 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=149, ppid=146, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, UNASSIGN 2023-07-11 18:17:15,250 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=151, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, UNASSIGN 2023-07-11 18:17:15,251 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, UNASSIGN 2023-07-11 18:17:15,251 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=146, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, UNASSIGN 2023-07-11 18:17:15,251 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=2c4b4ff55648d797684f4a45b99d77d7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:15,251 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435251"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099435251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099435251"}]},"ts":"1689099435251"} 2023-07-11 18:17:15,251 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=5fe4277e260d383df66e709c1a38f479, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,251 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435251"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099435251"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099435251"}]},"ts":"1689099435251"} 2023-07-11 18:17:15,255 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=da064b5e4470067992b17a8791e8344b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,255 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099435255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099435255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099435255"}]},"ts":"1689099435255"} 2023-07-11 18:17:15,255 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=0f8b033235a4159df704c1fd3ca34f19, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:15,255 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099435255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099435255"}]},"ts":"1689099435255"} 2023-07-11 18:17:15,255 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=f6857f11cdd5f55c00ab2ea8e6b26b40, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,255 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099435255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099435255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099435255"}]},"ts":"1689099435255"} 2023-07-11 18:17:15,258 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=150, state=RUNNABLE; CloseRegionProcedure 2c4b4ff55648d797684f4a45b99d77d7, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:15,258 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=149, state=RUNNABLE; CloseRegionProcedure 5fe4277e260d383df66e709c1a38f479, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:15,259 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=154, ppid=151, state=RUNNABLE; CloseRegionProcedure da064b5e4470067992b17a8791e8344b, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:15,260 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=155, ppid=148, state=RUNNABLE; CloseRegionProcedure 0f8b033235a4159df704c1fd3ca34f19, server=jenkins-hbase4.apache.org,45821,1689099407865}] 2023-07-11 18:17:15,261 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=156, ppid=147, state=RUNNABLE; CloseRegionProcedure f6857f11cdd5f55c00ab2ea8e6b26b40, server=jenkins-hbase4.apache.org,45471,1689099407428}] 2023-07-11 18:17:15,345 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-11 18:17:15,411 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2c4b4ff55648d797684f4a45b99d77d7, disabling compactions & flushes 2023-07-11 18:17:15,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:15,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:15,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 
after waiting 0 ms 2023-07-11 18:17:15,412 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:15,412 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,413 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f6857f11cdd5f55c00ab2ea8e6b26b40, disabling compactions & flushes 2023-07-11 18:17:15,414 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. after waiting 0 ms 2023-07-11 18:17:15,414 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:15,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:15,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7. 2023-07-11 18:17:15,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2c4b4ff55648d797684f4a45b99d77d7: 2023-07-11 18:17:15,419 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40. 2023-07-11 18:17:15,419 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f6857f11cdd5f55c00ab2ea8e6b26b40: 2023-07-11 18:17:15,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,420 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0f8b033235a4159df704c1fd3ca34f19, disabling compactions & flushes 2023-07-11 18:17:15,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 
2023-07-11 18:17:15,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:15,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. after waiting 0 ms 2023-07-11 18:17:15,422 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 2023-07-11 18:17:15,422 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=150 updating hbase:meta row=2c4b4ff55648d797684f4a45b99d77d7, regionState=CLOSED 2023-07-11 18:17:15,422 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435422"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099435422"}]},"ts":"1689099435422"} 2023-07-11 18:17:15,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,422 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing da064b5e4470067992b17a8791e8344b, disabling compactions & flushes 2023-07-11 18:17:15,424 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:15,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:15,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. after waiting 0 ms 2023-07-11 18:17:15,424 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 
2023-07-11 18:17:15,424 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=f6857f11cdd5f55c00ab2ea8e6b26b40, regionState=CLOSED 2023-07-11 18:17:15,424 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099435424"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099435424"}]},"ts":"1689099435424"} 2023-07-11 18:17:15,428 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=150 2023-07-11 18:17:15,428 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=150, state=SUCCESS; CloseRegionProcedure 2c4b4ff55648d797684f4a45b99d77d7, server=jenkins-hbase4.apache.org,45821,1689099407865 in 166 msec 2023-07-11 18:17:15,429 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=156, resume processing ppid=147 2023-07-11 18:17:15,430 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=156, ppid=147, state=SUCCESS; CloseRegionProcedure f6857f11cdd5f55c00ab2ea8e6b26b40, server=jenkins-hbase4.apache.org,45471,1689099407428 in 165 msec 2023-07-11 18:17:15,430 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=2c4b4ff55648d797684f4a45b99d77d7, UNASSIGN in 180 msec 2023-07-11 18:17:15,431 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=f6857f11cdd5f55c00ab2ea8e6b26b40, UNASSIGN in 181 msec 2023-07-11 18:17:15,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:15,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19. 
2023-07-11 18:17:15,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0f8b033235a4159df704c1fd3ca34f19: 2023-07-11 18:17:15,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,434 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=0f8b033235a4159df704c1fd3ca34f19, regionState=CLOSED 2023-07-11 18:17:15,434 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099435434"}]},"ts":"1689099435434"} 2023-07-11 18:17:15,437 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=155, resume processing ppid=148 2023-07-11 18:17:15,437 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, ppid=148, state=SUCCESS; CloseRegionProcedure 0f8b033235a4159df704c1fd3ca34f19, server=jenkins-hbase4.apache.org,45821,1689099407865 in 176 msec 2023-07-11 18:17:15,438 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=0f8b033235a4159df704c1fd3ca34f19, UNASSIGN in 189 msec 2023-07-11 18:17:15,443 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:15,444 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b. 2023-07-11 18:17:15,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for da064b5e4470067992b17a8791e8344b: 2023-07-11 18:17:15,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5fe4277e260d383df66e709c1a38f479, disabling compactions & flushes 2023-07-11 18:17:15,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:15,446 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=151 updating hbase:meta row=da064b5e4470067992b17a8791e8344b, regionState=CLOSED 2023-07-11 18:17:15,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:15,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 
after waiting 0 ms 2023-07-11 18:17:15,447 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689099435446"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099435446"}]},"ts":"1689099435446"} 2023-07-11 18:17:15,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:15,450 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=154, resume processing ppid=151 2023-07-11 18:17:15,450 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=154, ppid=151, state=SUCCESS; CloseRegionProcedure da064b5e4470067992b17a8791e8344b, server=jenkins-hbase4.apache.org,45471,1689099407428 in 189 msec 2023-07-11 18:17:15,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:15,451 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=da064b5e4470067992b17a8791e8344b, UNASSIGN in 202 msec 2023-07-11 18:17:15,451 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479. 2023-07-11 18:17:15,451 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5fe4277e260d383df66e709c1a38f479: 2023-07-11 18:17:15,453 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,453 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=149 updating hbase:meta row=5fe4277e260d383df66e709c1a38f479, regionState=CLOSED 2023-07-11 18:17:15,453 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689099435453"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099435453"}]},"ts":"1689099435453"} 2023-07-11 18:17:15,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=149 2023-07-11 18:17:15,456 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=149, state=SUCCESS; CloseRegionProcedure 5fe4277e260d383df66e709c1a38f479, server=jenkins-hbase4.apache.org,45471,1689099407428 in 197 msec 2023-07-11 18:17:15,458 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=146 2023-07-11 18:17:15,458 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=5fe4277e260d383df66e709c1a38f479, UNASSIGN in 208 msec 2023-07-11 18:17:15,459 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099435458"}]},"ts":"1689099435458"} 2023-07-11 18:17:15,460 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-11 18:17:15,462 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-11 18:17:15,464 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=146, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 222 msec 2023-07-11 18:17:15,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=146 2023-07-11 18:17:15,547 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 146 completed 2023-07-11 18:17:15,547 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,549 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:15,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:15,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-11 18:17:15,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1161086009, current retry=0 2023-07-11 18:17:15,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1161086009. 
2023-07-11 18:17:15,559 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:15,561 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-11 18:17:15,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:15,566 INFO [Listener at localhost/35107] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-11 18:17:15,566 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-11 18:17:15,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:15,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 924 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:57346 deadline: 1689099495566, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-11 18:17:15,567 DEBUG [Listener at localhost/35107] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
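The DisableTable RPC above is rejected with TableNotEnabledException because the table is already disabled, and HBaseTestingUtility falls back to deleting it directly ("already disabled, so just deleting it"). A small hedged sketch of that guard; the method name is illustrative, the API calls are the standard Admin ones visible in the trace.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.TableNotEnabledException;
    import org.apache.hadoop.hbase.client.Admin;

    // Hedged sketch of the "already disabled, so just deleting it" path above: disable only
    // if the table is still enabled, tolerate a concurrent disable, then delete.
    static void deleteTableSafely(Admin admin, TableName tn) throws Exception {
      try {
        if (admin.isTableEnabled(tn)) {
          admin.disableTable(tn);
        }
      } catch (TableNotEnabledException e) {
        // Someone else disabled it in the meantime; safe to proceed with the delete.
      }
      admin.deleteTable(tn);
    }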
2023-07-11 18:17:15,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-11 18:17:15,569 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] procedure2.ProcedureExecutor(1029): Stored pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,571 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=158, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1161086009' 2023-07-11 18:17:15,572 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=158, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,574 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:15,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:15,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-11 18:17:15,581 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,581 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,581 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,581 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,581 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,584 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/f, FileablePath, 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/recovered.edits] 2023-07-11 18:17:15,585 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/recovered.edits] 2023-07-11 18:17:15,585 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/recovered.edits] 2023-07-11 18:17:15,585 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/recovered.edits] 2023-07-11 18:17:15,585 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/f, FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/recovered.edits] 2023-07-11 18:17:15,601 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19/recovered.edits/4.seqid 2023-07-11 18:17:15,602 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b/recovered.edits/4.seqid 2023-07-11 18:17:15,602 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479/recovered.edits/4.seqid 2023-07-11 18:17:15,604 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/0f8b033235a4159df704c1fd3ca34f19 2023-07-11 18:17:15,604 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/da064b5e4470067992b17a8791e8344b 2023-07-11 18:17:15,604 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40/recovered.edits/4.seqid 2023-07-11 18:17:15,604 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/recovered.edits/4.seqid to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/archive/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7/recovered.edits/4.seqid 2023-07-11 18:17:15,604 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/5fe4277e260d383df66e709c1a38f479 2023-07-11 18:17:15,606 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/f6857f11cdd5f55c00ab2ea8e6b26b40 2023-07-11 18:17:15,606 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/.tmp/data/default/Group_testDisabledTableMove/2c4b4ff55648d797684f4a45b99d77d7 2023-07-11 18:17:15,606 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-11 18:17:15,610 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=158, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,613 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-11 18:17:15,619 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-11 18:17:15,620 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=158, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,620 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-11 18:17:15,620 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099435620"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:15,620 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099435620"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:15,620 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099435620"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:15,620 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099435620"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:15,621 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099435620"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:15,622 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-11 18:17:15,622 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f6857f11cdd5f55c00ab2ea8e6b26b40, NAME => 'Group_testDisabledTableMove,,1689099434618.f6857f11cdd5f55c00ab2ea8e6b26b40.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 0f8b033235a4159df704c1fd3ca34f19, NAME => 'Group_testDisabledTableMove,aaaaa,1689099434618.0f8b033235a4159df704c1fd3ca34f19.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 5fe4277e260d383df66e709c1a38f479, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689099434618.5fe4277e260d383df66e709c1a38f479.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 2c4b4ff55648d797684f4a45b99d77d7, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689099434618.2c4b4ff55648d797684f4a45b99d77d7.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => da064b5e4470067992b17a8791e8344b, NAME => 'Group_testDisabledTableMove,zzzzz,1689099434618.da064b5e4470067992b17a8791e8344b.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-11 18:17:15,623 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
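The meta deletes above list the five regions of the table together with their encoded names and split boundaries ('', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz'). For reference, a hedged sketch of how the same boundary information can be read from the client side before a table is dropped; the helper name is illustrative and it assumes the 2.x Admin.getRegions API.

    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hedged sketch: print the encoded name and [start, end) boundary of each region, the
    // same information the meta deletes above show for the five regions being removed.
    static void dumpRegionBoundaries(Admin admin) throws Exception {
      List<RegionInfo> regions =
          admin.getRegions(TableName.valueOf("Group_testDisabledTableMove"));
      for (RegionInfo ri : regions) {
        System.out.println(ri.getEncodedName() + " ["
            + Bytes.toStringBinary(ri.getStartKey()) + ", "
            + Bytes.toStringBinary(ri.getEndKey()) + ")");
      }
    }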
2023-07-11 18:17:15,623 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099435623"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:15,624 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-11 18:17:15,626 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=158, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-11 18:17:15,627 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=158, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 58 msec 2023-07-11 18:17:15,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(1230): Checking to see if procedure is done pid=158 2023-07-11 18:17:15,679 INFO [Listener at localhost/35107] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 158 completed 2023-07-11 18:17:15,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:15,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
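After the DELETE operation completes, the test issues several list rsgroup requests to verify that Group_testDisabledTableMove no longer appears in any group. A hedged sketch of the corresponding client call; the output formatting is illustrative.

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Hedged sketch: list every rsgroup with its servers and tables, mirroring the repeated
    // "list rsgroup" verification requests in the log above.
    static void dumpGroups(Connection conn) throws Exception {
      for (RSGroupInfo info : new RSGroupAdminClient(conn).listRSGroups()) {
        System.out.println(info.getName() + " servers=" + info.getServers()
            + " tables=" + info.getTables());
      }
    }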
2023-07-11 18:17:15,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:15,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:37773] to rsgroup default 2023-07-11 18:17:15,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:15,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:15,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1161086009, current retry=0 2023-07-11 18:17:15,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37773,1689099407650, jenkins-hbase4.apache.org,37889,1689099411612] are moved back to Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1161086009 => default 2023-07-11 18:17:15,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:15,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1161086009 2023-07-11 18:17:15,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:15,693 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:17:15,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:15,695 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:15,696 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
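The teardown above moves servers jenkins-hbase4.apache.org:37889 and :37773 back to the default group and then removes the per-test rsgroup. A hedged sketch of that cleanup, with the host/port and group names taken from the log and the helper name illustrative.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Hedged sketch of the teardown recorded above: move the two group servers back to
    // "default", then drop the now-empty per-test rsgroup.
    static void restoreDefaultGroup(Connection conn) throws Exception {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37889));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37773));
      groups.moveServers(servers, "default");
      groups.removeRSGroup("Group_testDisabledTableMove_1161086009");
    }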
2023-07-11 18:17:15,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:15,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:15,696 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:15,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:15,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:15,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:15,706 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:15,707 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:15,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:15,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:15,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:15,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:15,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:15,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 958 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100635725, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:15,726 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:15,728 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:15,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,729 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:15,729 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:15,730 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:15,751 INFO [Listener at localhost/35107] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=524 (was 521) Potentially hanging thread: hconnection-0x51dbb1fe-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1964151006_17 at /127.0.0.1:57786 [Waiting for operation #14] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7b3db8b3-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1586470521_17 at /127.0.0.1:60800 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=820 (was 800) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=563 (was 563), ProcessCount=170 (was 170), AvailableMemoryMB=4304 (was 4292) - AvailableMemoryMB LEAK? 
- 2023-07-11 18:17:15,751 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-11 18:17:15,774 INFO [Listener at localhost/35107] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=524, OpenFileDescriptor=820, MaxFileDescriptor=60000, SystemLoadAverage=563, ProcessCount=170, AvailableMemoryMB=4305 2023-07-11 18:17:15,774 WARN [Listener at localhost/35107] hbase.ResourceChecker(130): Thread=524 is superior to 500 2023-07-11 18:17:15,774 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-11 18:17:15,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:15,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:15,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:15,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:15,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:15,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:15,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:15,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:15,793 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:15,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:15,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:15,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-11 18:17:15,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:15,799 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:15,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45397] to rsgroup master 2023-07-11 18:17:15,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:15,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] ipc.CallRunner(144): callId: 986 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:57346 deadline: 1689100635804, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 2023-07-11 18:17:15,804 WARN [Listener at localhost/35107] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45397 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:15,806 INFO [Listener at localhost/35107] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:15,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:15,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:15,807 INFO [Listener at localhost/35107] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37773, jenkins-hbase4.apache.org:37889, jenkins-hbase4.apache.org:45471, jenkins-hbase4.apache.org:45821], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:15,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:15,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45397] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:15,808 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-11 18:17:15,808 INFO [Listener at localhost/35107] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-11 18:17:15,809 DEBUG [Listener at localhost/35107] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x326b7986 to 127.0.0.1:58592 2023-07-11 18:17:15,809 DEBUG [Listener at localhost/35107] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,812 DEBUG [Listener at localhost/35107] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-11 18:17:15,812 DEBUG [Listener at localhost/35107] util.JVMClusterUtil(257): Found active master hash=1464936596, stopped=false 2023-07-11 18:17:15,812 DEBUG [Listener at localhost/35107] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 18:17:15,812 DEBUG [Listener at localhost/35107] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 18:17:15,812 INFO [Listener at localhost/35107] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:17:15,815 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:15,815 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:15,815 INFO [Listener at localhost/35107] procedure2.ProcedureExecutor(629): Stopping 2023-07-11 18:17:15,815 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper 
Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:15,815 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:15,815 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:15,815 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:15,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:15,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:15,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:15,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:15,816 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:15,817 DEBUG [Listener at localhost/35107] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x26cd00a3 to 127.0.0.1:58592 2023-07-11 18:17:15,817 DEBUG [Listener at localhost/35107] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,817 INFO [Listener at localhost/35107] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45471,1689099407428' ***** 2023-07-11 18:17:15,817 INFO [Listener at localhost/35107] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:15,817 INFO [Listener at localhost/35107] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37773,1689099407650' ***** 2023-07-11 18:17:15,817 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:15,817 INFO [Listener at localhost/35107] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:15,818 INFO [Listener at localhost/35107] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45821,1689099407865' ***** 2023-07-11 18:17:15,818 INFO [Listener at localhost/35107] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:15,818 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:15,818 INFO [Listener at localhost/35107] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37889,1689099411612' ***** 2023-07-11 18:17:15,818 INFO [Listener at 
localhost/35107] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:15,818 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:15,818 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:15,825 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:15,827 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:15,827 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:15,828 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:15,829 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:15,829 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:15,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:15,830 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:15,836 INFO [RS:3;jenkins-hbase4:37889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@a749b0e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:15,836 INFO [RS:0;jenkins-hbase4:45471] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6e1656d5{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:15,836 INFO [RS:1;jenkins-hbase4:37773] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@324c91bf{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:15,836 INFO [RS:2;jenkins-hbase4:45821] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3d078368{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:15,836 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-11 18:17:15,836 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-11 18:17:15,836 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-11 18:17:15,837 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-11 18:17:15,840 INFO [RS:0;jenkins-hbase4:45471] server.AbstractConnector(383): Stopped ServerConnector@5a7c8feb{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:15,840 INFO [RS:2;jenkins-hbase4:45821] server.AbstractConnector(383): Stopped ServerConnector@6b30b1b6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:15,840 INFO [RS:3;jenkins-hbase4:37889] server.AbstractConnector(383): Stopped ServerConnector@2fbb41cf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 
2023-07-11 18:17:15,840 INFO [RS:1;jenkins-hbase4:37773] server.AbstractConnector(383): Stopped ServerConnector@c8fca5f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:15,840 INFO [RS:3;jenkins-hbase4:37889] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:15,840 INFO [RS:2;jenkins-hbase4:45821] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:15,840 INFO [RS:0;jenkins-hbase4:45471] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:15,841 INFO [RS:3;jenkins-hbase4:37889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@593950e0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:15,840 INFO [RS:1;jenkins-hbase4:37773] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:15,843 INFO [RS:0;jenkins-hbase4:45471] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@49a4b2bd{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:15,843 INFO [RS:3;jenkins-hbase4:37889] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@480cfad4{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:15,844 INFO [RS:0;jenkins-hbase4:45471] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1eb685a1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:15,843 INFO [RS:2;jenkins-hbase4:45821] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3743c5eb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:15,844 INFO [RS:1;jenkins-hbase4:37773] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@29e64012{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:15,845 INFO [RS:2;jenkins-hbase4:45821] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@505a01fa{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:15,845 INFO [RS:1;jenkins-hbase4:37773] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7fe3f683{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:15,848 INFO [RS:2;jenkins-hbase4:45821] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:15,848 INFO [RS:0;jenkins-hbase4:45471] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:15,848 INFO [RS:2;jenkins-hbase4:45821] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-11 18:17:15,848 INFO [RS:0;jenkins-hbase4:45471] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:15,848 INFO [RS:2;jenkins-hbase4:45821] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:15,848 INFO [RS:0;jenkins-hbase4:45471] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:15,848 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(3305): Received CLOSE for dcd96e07bbafe36befea35835c2da151 2023-07-11 18:17:15,848 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(3305): Received CLOSE for 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:15,849 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:15,849 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(3305): Received CLOSE for b41d0021b1b281d3ab8046d2e4311514 2023-07-11 18:17:15,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4c6a8d44fa4722459b1960c3bec455c9, disabling compactions & flushes 2023-07-11 18:17:15,850 INFO [RS:1;jenkins-hbase4:37773] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:15,849 DEBUG [RS:0;jenkins-hbase4:45471] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x27d8117e to 127.0.0.1:58592 2023-07-11 18:17:15,850 INFO [RS:1;jenkins-hbase4:37773] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:15,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:15,850 INFO [RS:3;jenkins-hbase4:37889] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:15,850 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(3305): Received CLOSE for ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:17:15,851 INFO [RS:3;jenkins-hbase4:37889] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:15,851 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:15,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dcd96e07bbafe36befea35835c2da151, disabling compactions & flushes 2023-07-11 18:17:15,851 DEBUG [RS:2;jenkins-hbase4:45821] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1039ce8b to 127.0.0.1:58592 2023-07-11 18:17:15,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:15,851 INFO [RS:3;jenkins-hbase4:37889] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:15,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:15,851 INFO [RS:1;jenkins-hbase4:37773] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-11 18:17:15,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. after waiting 0 ms 2023-07-11 18:17:15,851 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:15,850 DEBUG [RS:0;jenkins-hbase4:45471] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,851 DEBUG [RS:1;jenkins-hbase4:37773] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x531542d5 to 127.0.0.1:58592 2023-07-11 18:17:15,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:15,851 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:15,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:15,852 DEBUG [RS:3;jenkins-hbase4:37889] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7bf55d29 to 127.0.0.1:58592 2023-07-11 18:17:15,851 DEBUG [RS:2;jenkins-hbase4:45821] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,852 DEBUG [RS:3;jenkins-hbase4:37889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,852 INFO [RS:2;jenkins-hbase4:45821] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:15,852 INFO [RS:2;jenkins-hbase4:45821] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:15,852 INFO [RS:2;jenkins-hbase4:45821] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:15,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. after waiting 0 ms 2023-07-11 18:17:15,851 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-11 18:17:15,851 DEBUG [RS:1;jenkins-hbase4:37773] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,852 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1478): Online Regions={4c6a8d44fa4722459b1960c3bec455c9=testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9.} 2023-07-11 18:17:15,852 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:15,852 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-11 18:17:15,852 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37889,1689099411612; all regions closed. 
2023-07-11 18:17:15,853 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-11 18:17:15,853 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1478): Online Regions={dcd96e07bbafe36befea35835c2da151=unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151., b41d0021b1b281d3ab8046d2e4311514=hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514., 1588230740=hbase:meta,,1.1588230740, ddeec04b60fd5a8c2d4719765d0b2735=hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735.} 2023-07-11 18:17:15,853 DEBUG [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1504): Waiting on 1588230740, b41d0021b1b281d3ab8046d2e4311514, dcd96e07bbafe36befea35835c2da151, ddeec04b60fd5a8c2d4719765d0b2735 2023-07-11 18:17:15,852 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37773,1689099407650; all regions closed. 2023-07-11 18:17:15,853 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:17:15,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:17:15,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:17:15,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:17:15,854 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:17:15,854 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=37.46 KB heapSize=61.09 KB 2023-07-11 18:17:15,853 DEBUG [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1504): Waiting on 4c6a8d44fa4722459b1960c3bec455c9 2023-07-11 18:17:15,858 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/testRename/4c6a8d44fa4722459b1960c3bec455c9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-11 18:17:15,859 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:15,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4c6a8d44fa4722459b1960c3bec455c9: 2023-07-11 18:17:15,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/default/unmovedTable/dcd96e07bbafe36befea35835c2da151/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-11 18:17:15,859 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689099428483.4c6a8d44fa4722459b1960c3bec455c9. 2023-07-11 18:17:15,862 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 
2023-07-11 18:17:15,862 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dcd96e07bbafe36befea35835c2da151: 2023-07-11 18:17:15,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689099430673.dcd96e07bbafe36befea35835c2da151. 2023-07-11 18:17:15,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b41d0021b1b281d3ab8046d2e4311514, disabling compactions & flushes 2023-07-11 18:17:15,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:15,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:15,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. after waiting 0 ms 2023-07-11 18:17:15,863 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:15,863 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing b41d0021b1b281d3ab8046d2e4311514 1/1 column families, dataSize=22.08 KB heapSize=36.54 KB 2023-07-11 18:17:15,869 DEBUG [RS:3;jenkins-hbase4:37889] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:15,869 INFO [RS:3;jenkins-hbase4:37889] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37889%2C1689099411612:(num 1689099412024) 2023-07-11 18:17:15,869 DEBUG [RS:3;jenkins-hbase4:37889] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,869 INFO [RS:3;jenkins-hbase4:37889] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:15,875 INFO [RS:3;jenkins-hbase4:37889] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:15,875 INFO [RS:3;jenkins-hbase4:37889] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:15,875 INFO [RS:3;jenkins-hbase4:37889] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:15,875 INFO [RS:3;jenkins-hbase4:37889] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:15,875 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 18:17:15,877 INFO [RS:3;jenkins-hbase4:37889] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37889 2023-07-11 18:17:15,878 DEBUG [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:15,878 INFO [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37773%2C1689099407650.meta:.meta(num 1689099410120) 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37889,1689099411612 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:15,885 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:15,886 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37889,1689099411612] 2023-07-11 18:17:15,886 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37889,1689099411612; numProcessing=1 2023-07-11 18:17:15,896 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37889,1689099411612 already deleted, retry=false 
2023-07-11 18:17:15,896 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37889,1689099411612 expired; onlineServers=3 2023-07-11 18:17:15,897 DEBUG [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:15,897 INFO [RS:1;jenkins-hbase4:37773] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37773%2C1689099407650:(num 1689099409969) 2023-07-11 18:17:15,898 DEBUG [RS:1;jenkins-hbase4:37773] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:15,898 INFO [RS:1;jenkins-hbase4:37773] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:15,905 INFO [RS:1;jenkins-hbase4:37773] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:15,905 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=34.54 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/info/72cf6f8bdc3b44009789206536ff285f 2023-07-11 18:17:15,906 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:15,906 INFO [RS:1;jenkins-hbase4:37773] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:15,906 INFO [RS:1;jenkins-hbase4:37773] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:15,906 INFO [RS:1;jenkins-hbase4:37773] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-11 18:17:15,907 INFO [RS:1;jenkins-hbase4:37773] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37773 2023-07-11 18:17:15,912 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:15,912 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:15,912 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:15,913 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37773,1689099407650 2023-07-11 18:17:15,913 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37773,1689099407650] 2023-07-11 18:17:15,913 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37773,1689099407650; numProcessing=2 2023-07-11 18:17:15,914 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.08 KB at sequenceid=107 (bloomFilter=true), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/.tmp/m/f137a5228ac24f86981374e80ce4a16c 2023-07-11 18:17:15,918 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 72cf6f8bdc3b44009789206536ff285f 2023-07-11 18:17:15,923 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f137a5228ac24f86981374e80ce4a16c 2023-07-11 18:17:15,924 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/.tmp/m/f137a5228ac24f86981374e80ce4a16c as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/f137a5228ac24f86981374e80ce4a16c 2023-07-11 18:17:15,925 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37773,1689099407650 already deleted, retry=false 2023-07-11 18:17:15,925 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37773,1689099407650 expired; onlineServers=2 2023-07-11 18:17:15,932 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f137a5228ac24f86981374e80ce4a16c 2023-07-11 18:17:15,933 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/m/f137a5228ac24f86981374e80ce4a16c, entries=22, sequenceid=107, filesize=5.9 K 2023-07-11 18:17:15,934 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.08 KB/22614, heapSize ~36.52 KB/37400, currentSize=0 B/0 for b41d0021b1b281d3ab8046d2e4311514 in 71ms, sequenceid=107, compaction requested=true 2023-07-11 18:17:15,955 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/rsgroup/b41d0021b1b281d3ab8046d2e4311514/recovered.edits/110.seqid, newMaxSeqId=110, maxSeqId=35 2023-07-11 18:17:15,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:17:15,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:15,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b41d0021b1b281d3ab8046d2e4311514: 2023-07-11 18:17:15,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689099410637.b41d0021b1b281d3ab8046d2e4311514. 2023-07-11 18:17:15,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ddeec04b60fd5a8c2d4719765d0b2735, disabling compactions & flushes 2023-07-11 18:17:15,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:17:15,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:17:15,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. after waiting 0 ms 2023-07-11 18:17:15,957 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 
2023-07-11 18:17:15,957 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ddeec04b60fd5a8c2d4719765d0b2735 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-11 18:17:15,965 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=868 B at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/rep_barrier/9768d2c0614742b98ca1a8dca7bbbde6 2023-07-11 18:17:15,972 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9768d2c0614742b98ca1a8dca7bbbde6 2023-07-11 18:17:15,986 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/.tmp/info/3d5e54ebc3f542fb8fa931dd3185fdc4 2023-07-11 18:17:15,994 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.07 KB at sequenceid=216 (bloomFilter=false), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/table/2655c0bc1c374d5ab56ece1da4a5ba40 2023-07-11 18:17:15,997 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/.tmp/info/3d5e54ebc3f542fb8fa931dd3185fdc4 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/info/3d5e54ebc3f542fb8fa931dd3185fdc4 2023-07-11 18:17:16,000 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2655c0bc1c374d5ab56ece1da4a5ba40 2023-07-11 18:17:16,001 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/info/72cf6f8bdc3b44009789206536ff285f as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/72cf6f8bdc3b44009789206536ff285f 2023-07-11 18:17:16,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/info/3d5e54ebc3f542fb8fa931dd3185fdc4, entries=2, sequenceid=6, filesize=4.8 K 2023-07-11 18:17:16,005 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for ddeec04b60fd5a8c2d4719765d0b2735 in 48ms, sequenceid=6, compaction requested=false 2023-07-11 18:17:16,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 72cf6f8bdc3b44009789206536ff285f 2023-07-11 18:17:16,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/info/72cf6f8bdc3b44009789206536ff285f, entries=62, sequenceid=216, filesize=11.8 K 2023-07-11 18:17:16,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/rep_barrier/9768d2c0614742b98ca1a8dca7bbbde6 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier/9768d2c0614742b98ca1a8dca7bbbde6 2023-07-11 18:17:16,012 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/namespace/ddeec04b60fd5a8c2d4719765d0b2735/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-11 18:17:16,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:17:16,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ddeec04b60fd5a8c2d4719765d0b2735: 2023-07-11 18:17:16,013 INFO [RS:3;jenkins-hbase4:37889] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37889,1689099411612; zookeeper connection closed. 2023-07-11 18:17:16,013 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689099410410.ddeec04b60fd5a8c2d4719765d0b2735. 2023-07-11 18:17:16,013 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,014 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37889-0x101559a084d000b, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,014 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@592e8cc6] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@592e8cc6 2023-07-11 18:17:16,017 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,017 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:37773-0x101559a084d0002, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,017 INFO [RS:1;jenkins-hbase4:37773] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37773,1689099407650; zookeeper connection closed. 
2023-07-11 18:17:16,017 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@10fa5e51] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@10fa5e51 2023-07-11 18:17:16,018 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9768d2c0614742b98ca1a8dca7bbbde6 2023-07-11 18:17:16,018 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/rep_barrier/9768d2c0614742b98ca1a8dca7bbbde6, entries=8, sequenceid=216, filesize=5.8 K 2023-07-11 18:17:16,019 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/.tmp/table/2655c0bc1c374d5ab56ece1da4a5ba40 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/2655c0bc1c374d5ab56ece1da4a5ba40 2023-07-11 18:17:16,025 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 2655c0bc1c374d5ab56ece1da4a5ba40 2023-07-11 18:17:16,026 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/table/2655c0bc1c374d5ab56ece1da4a5ba40, entries=16, sequenceid=216, filesize=6.0 K 2023-07-11 18:17:16,027 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~37.46 KB/38356, heapSize ~61.05 KB/62512, currentSize=0 B/0 for 1588230740 in 172ms, sequenceid=216, compaction requested=true 2023-07-11 18:17:16,039 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/data/hbase/meta/1588230740/recovered.edits/219.seqid, newMaxSeqId=219, maxSeqId=104 2023-07-11 18:17:16,040 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:17:16,041 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:16,041 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:17:16,041 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:16,053 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45821,1689099407865; all regions closed. 2023-07-11 18:17:16,054 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45471,1689099407428; all regions closed. 
2023-07-11 18:17:16,067 DEBUG [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:16,067 INFO [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45821%2C1689099407865.meta:.meta(num 1689099419688) 2023-07-11 18:17:16,068 DEBUG [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:16,068 INFO [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45471%2C1689099407428.meta:.meta(num 1689099412876) 2023-07-11 18:17:16,075 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/WALs/jenkins-hbase4.apache.org,45471,1689099407428/jenkins-hbase4.apache.org%2C45471%2C1689099407428.1689099409969 not finished, retry = 0 2023-07-11 18:17:16,076 DEBUG [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:16,076 INFO [RS:2;jenkins-hbase4:45821] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45821%2C1689099407865:(num 1689099409968) 2023-07-11 18:17:16,076 DEBUG [RS:2;jenkins-hbase4:45821] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:16,076 INFO [RS:2;jenkins-hbase4:45821] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:16,076 INFO [RS:2;jenkins-hbase4:45821] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:16,077 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 18:17:16,077 INFO [RS:2;jenkins-hbase4:45821] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45821 2023-07-11 18:17:16,079 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:16,079 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:16,079 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45821,1689099407865 2023-07-11 18:17:16,081 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45821,1689099407865] 2023-07-11 18:17:16,081 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45821,1689099407865; numProcessing=3 2023-07-11 18:17:16,178 DEBUG [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/oldWALs 2023-07-11 18:17:16,178 INFO [RS:0;jenkins-hbase4:45471] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45471%2C1689099407428:(num 1689099409969) 2023-07-11 18:17:16,178 DEBUG [RS:0;jenkins-hbase4:45471] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:16,178 INFO [RS:0;jenkins-hbase4:45471] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:16,179 INFO [RS:0;jenkins-hbase4:45471] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:16,179 INFO [RS:0;jenkins-hbase4:45471] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:16,179 INFO [RS:0;jenkins-hbase4:45471] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:16,179 INFO [RS:0;jenkins-hbase4:45471] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:16,179 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:16,180 INFO [RS:0;jenkins-hbase4:45471] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45471 2023-07-11 18:17:16,181 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,181 INFO [RS:2;jenkins-hbase4:45821] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45821,1689099407865; zookeeper connection closed. 
2023-07-11 18:17:16,181 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45821-0x101559a084d0003, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,182 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7e60365e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7e60365e 2023-07-11 18:17:16,185 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45821,1689099407865 already deleted, retry=false 2023-07-11 18:17:16,185 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45821,1689099407865 expired; onlineServers=1 2023-07-11 18:17:16,186 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:16,186 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45471,1689099407428 2023-07-11 18:17:16,187 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45471,1689099407428] 2023-07-11 18:17:16,187 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45471,1689099407428; numProcessing=4 2023-07-11 18:17:16,188 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45471,1689099407428 already deleted, retry=false 2023-07-11 18:17:16,188 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45471,1689099407428 expired; onlineServers=0 2023-07-11 18:17:16,188 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45397,1689099405546' ***** 2023-07-11 18:17:16,188 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-11 18:17:16,189 DEBUG [M:0;jenkins-hbase4:45397] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@166c22af, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:16,189 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:16,192 INFO [M:0;jenkins-hbase4:45397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@9ca6b1f{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 18:17:16,193 INFO [M:0;jenkins-hbase4:45397] server.AbstractConnector(383): Stopped ServerConnector@668fa014{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:16,193 INFO [M:0;jenkins-hbase4:45397] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:16,194 INFO [M:0;jenkins-hbase4:45397] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@5310d071{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:16,194 INFO [M:0;jenkins-hbase4:45397] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@966e0ef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:16,195 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45397,1689099405546 2023-07-11 18:17:16,195 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45397,1689099405546; all regions closed. 2023-07-11 18:17:16,195 DEBUG [M:0;jenkins-hbase4:45397] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:16,195 INFO [M:0;jenkins-hbase4:45397] master.HMaster(1491): Stopping master jetty server 2023-07-11 18:17:16,196 INFO [M:0;jenkins-hbase4:45397] server.AbstractConnector(383): Stopped ServerConnector@18359baf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:16,197 DEBUG [M:0;jenkins-hbase4:45397] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-11 18:17:16,197 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-11 18:17:16,197 DEBUG [M:0;jenkins-hbase4:45397] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-11 18:17:16,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099409512] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099409512,5,FailOnTimeoutGroup] 2023-07-11 18:17:16,197 INFO [M:0;jenkins-hbase4:45397] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-11 18:17:16,197 INFO [M:0;jenkins-hbase4:45397] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-11 18:17:16,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099409511] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099409511,5,FailOnTimeoutGroup] 2023-07-11 18:17:16,197 INFO [M:0;jenkins-hbase4:45397] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-11 18:17:16,197 DEBUG [M:0;jenkins-hbase4:45397] master.HMaster(1512): Stopping service threads 2023-07-11 18:17:16,197 INFO [M:0;jenkins-hbase4:45397] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-11 18:17:16,198 ERROR [M:0;jenkins-hbase4:45397] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-11 18:17:16,199 INFO [M:0;jenkins-hbase4:45397] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-11 18:17:16,199 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-11 18:17:16,288 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,288 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x101559a084d0001, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,288 INFO [RS:0;jenkins-hbase4:45471] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45471,1689099407428; zookeeper connection closed. 
2023-07-11 18:17:16,289 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@518bbaf3] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@518bbaf3 2023-07-11 18:17:16,289 INFO [Listener at localhost/35107] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-11 18:17:16,291 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:16,292 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:16,292 INFO [M:0;jenkins-hbase4:45397] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-11 18:17:16,292 INFO [M:0;jenkins-hbase4:45397] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-11 18:17:16,292 DEBUG [M:0;jenkins-hbase4:45397] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 18:17:16,292 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:16,292 DEBUG [M:0;jenkins-hbase4:45397] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:16,292 DEBUG [M:0;jenkins-hbase4:45397] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 18:17:16,292 DEBUG [M:0;jenkins-hbase4:45397] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 18:17:16,292 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=528.60 KB heapSize=632.80 KB 2023-07-11 18:17:16,293 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/master already deleted, retry=false 2023-07-11 18:17:16,293 DEBUG [RegionServerTracker-0] master.ActiveMasterManager(335): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Failed delete of our master address node; KeeperErrorCode = NoNode for /hbase/master 2023-07-11 18:17:16,293 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:16,309 INFO [M:0;jenkins-hbase4:45397] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=528.60 KB at sequenceid=1176 (bloomFilter=true), to=hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8a329712c27648fcb2f8602ab435e407 2023-07-11 18:17:16,315 DEBUG [M:0;jenkins-hbase4:45397] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8a329712c27648fcb2f8602ab435e407 as hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8a329712c27648fcb2f8602ab435e407 2023-07-11 18:17:16,319 INFO [M:0;jenkins-hbase4:45397] regionserver.HStore(1080): Added hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8a329712c27648fcb2f8602ab435e407, entries=157, sequenceid=1176, filesize=27.6 K 2023-07-11 18:17:16,320 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegion(2948): Finished flush of dataSize ~528.60 KB/541283, heapSize ~632.79 KB/647976, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 28ms, sequenceid=1176, compaction requested=false 2023-07-11 18:17:16,322 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:16,322 DEBUG [M:0;jenkins-hbase4:45397] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:16,324 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/MasterData/WALs/jenkins-hbase4.apache.org,45397,1689099405546/jenkins-hbase4.apache.org%2C45397%2C1689099405546.1689099408625 not finished, retry = 0 2023-07-11 18:17:16,426 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:16,426 INFO [M:0;jenkins-hbase4:45397] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 
2023-07-11 18:17:16,427 INFO [M:0;jenkins-hbase4:45397] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45397 2023-07-11 18:17:16,428 DEBUG [M:0;jenkins-hbase4:45397] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45397,1689099405546 already deleted, retry=false 2023-07-11 18:17:16,550 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-11 18:17:16,615 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,615 INFO [M:0;jenkins-hbase4:45397] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45397,1689099405546; zookeeper connection closed. 2023-07-11 18:17:16,615 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): master:45397-0x101559a084d0000, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:16,617 WARN [Listener at localhost/35107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:16,622 INFO [Listener at localhost/35107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:16,726 WARN [BP-629278552-172.31.14.131-1689099401665 heartbeating to localhost/127.0.0.1:40365] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:16,726 WARN [BP-629278552-172.31.14.131-1689099401665 heartbeating to localhost/127.0.0.1:40365] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-629278552-172.31.14.131-1689099401665 (Datanode Uuid ee9d9c6a-f45b-4904-8c1c-d550ecb59c3b) service to localhost/127.0.0.1:40365 2023-07-11 18:17:16,728 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data5/current/BP-629278552-172.31.14.131-1689099401665] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:16,728 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data6/current/BP-629278552-172.31.14.131-1689099401665] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:16,730 WARN [Listener at localhost/35107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:16,736 INFO [Listener at localhost/35107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:16,839 WARN [BP-629278552-172.31.14.131-1689099401665 heartbeating to localhost/127.0.0.1:40365] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:16,839 WARN [BP-629278552-172.31.14.131-1689099401665 heartbeating to localhost/127.0.0.1:40365] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-629278552-172.31.14.131-1689099401665 (Datanode Uuid 8ed6104c-4154-4c33-89e4-0b32f1a01437) service to localhost/127.0.0.1:40365 2023-07-11 18:17:16,840 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data3/current/BP-629278552-172.31.14.131-1689099401665] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:16,840 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data4/current/BP-629278552-172.31.14.131-1689099401665] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:16,842 WARN [Listener at localhost/35107] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:16,846 INFO [Listener at localhost/35107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:16,949 WARN [BP-629278552-172.31.14.131-1689099401665 heartbeating to localhost/127.0.0.1:40365] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:16,949 WARN [BP-629278552-172.31.14.131-1689099401665 heartbeating to localhost/127.0.0.1:40365] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-629278552-172.31.14.131-1689099401665 (Datanode Uuid 0eb601d2-e270-46a9-89e4-2f65bf74c392) service to localhost/127.0.0.1:40365 2023-07-11 18:17:16,950 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data1/current/BP-629278552-172.31.14.131-1689099401665] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:16,950 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/cluster_66fe2a0c-baee-fc58-4092-355e2e16c678/dfs/data/data2/current/BP-629278552-172.31.14.131-1689099401665] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:16,979 INFO [Listener at localhost/35107] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:17,100 INFO [Listener at localhost/35107] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-11 18:17:17,161 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-11 18:17:17,161 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-11 18:17:17,161 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.log.dir so I do NOT create it in target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b 2023-07-11 18:17:17,162 INFO [Listener at localhost/35107] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/09a9ec68-73a5-cf2a-7953-b00ddc201b2b/hadoop.tmp.dir so I do NOT create it in target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b 2023-07-11 18:17:17,162 INFO [Listener at localhost/35107] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb, deleteOnExit=true 2023-07-11 18:17:17,162 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-11 18:17:17,162 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/test.cache.data in system properties and HBase conf 2023-07-11 18:17:17,162 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.tmp.dir in system properties and HBase conf 2023-07-11 18:17:17,163 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir in system properties and HBase conf 2023-07-11 18:17:17,163 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-11 18:17:17,163 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-11 18:17:17,163 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-11 18:17:17,163 DEBUG [Listener at localhost/35107] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-11 18:17:17,163 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-11 18:17:17,164 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/nfs.dump.dir in system properties and HBase conf 2023-07-11 18:17:17,165 INFO [Listener at localhost/35107] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir in system properties and HBase conf 2023-07-11 18:17:17,165 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 18:17:17,165 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-11 18:17:17,165 INFO [Listener at localhost/35107] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-11 18:17:17,171 WARN [Listener at localhost/35107] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 18:17:17,172 WARN [Listener at localhost/35107] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 18:17:17,197 DEBUG [Listener at localhost/35107-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101559a084d000a, quorum=127.0.0.1:58592, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-11 18:17:17,197 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101559a084d000a, quorum=127.0.0.1:58592, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-11 18:17:17,223 WARN [Listener at localhost/35107] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:17,225 INFO [Listener at localhost/35107] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:17,229 INFO [Listener at localhost/35107] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/Jetty_localhost_35659_hdfs____mofexu/webapp 2023-07-11 18:17:17,320 INFO [Listener at localhost/35107] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35659 2023-07-11 18:17:17,327 WARN [Listener at localhost/35107] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 18:17:17,327 WARN [Listener at localhost/35107] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 18:17:17,380 WARN [Listener at localhost/33083] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:17:17,400 WARN [Listener at localhost/33083] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:17:17,403 WARN [Listener 
at localhost/33083] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:17,404 INFO [Listener at localhost/33083] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:17,409 INFO [Listener at localhost/33083] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/Jetty_localhost_38141_datanode____.j4dcj4/webapp 2023-07-11 18:17:17,503 INFO [Listener at localhost/33083] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:38141 2023-07-11 18:17:17,510 WARN [Listener at localhost/33011] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:17:17,531 WARN [Listener at localhost/33011] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:17:17,533 WARN [Listener at localhost/33011] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:17,535 INFO [Listener at localhost/33011] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:17,539 INFO [Listener at localhost/33011] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/Jetty_localhost_45513_datanode____1mutwd/webapp 2023-07-11 18:17:17,632 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7f11adb5451dcc6e: Processing first storage report for DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178 from datanode 628cb278-a110-45f6-8968-50d98a90d79b 2023-07-11 18:17:17,633 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7f11adb5451dcc6e: from storage DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178 node DatanodeRegistration(127.0.0.1:33959, datanodeUuid=628cb278-a110-45f6-8968-50d98a90d79b, infoPort=41263, infoSecurePort=0, ipcPort=33011, storageInfo=lv=-57;cid=testClusterID;nsid=893969982;c=1689099437175), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-11 18:17:17,633 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7f11adb5451dcc6e: Processing first storage report for DS-e3124525-9bc0-4e42-9d8d-b8ab69a1bf95 from datanode 628cb278-a110-45f6-8968-50d98a90d79b 2023-07-11 18:17:17,633 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7f11adb5451dcc6e: from storage DS-e3124525-9bc0-4e42-9d8d-b8ab69a1bf95 node DatanodeRegistration(127.0.0.1:33959, datanodeUuid=628cb278-a110-45f6-8968-50d98a90d79b, infoPort=41263, infoSecurePort=0, ipcPort=33011, storageInfo=lv=-57;cid=testClusterID;nsid=893969982;c=1689099437175), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:17,660 INFO [Listener at localhost/33011] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45513 2023-07-11 18:17:17,669 WARN [Listener at localhost/44203] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-11 18:17:17,687 WARN [Listener at localhost/44203] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:17:17,690 WARN [Listener at localhost/44203] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:17,692 INFO [Listener at localhost/44203] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:17,696 INFO [Listener at localhost/44203] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/Jetty_localhost_40765_datanode____.stp5un/webapp 2023-07-11 18:17:17,792 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1275c26878ffcf52: Processing first storage report for DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53 from datanode 79ed39e4-9e95-4d5e-9b06-a0c1a820c28d 2023-07-11 18:17:17,792 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1275c26878ffcf52: from storage DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53 node DatanodeRegistration(127.0.0.1:46655, datanodeUuid=79ed39e4-9e95-4d5e-9b06-a0c1a820c28d, infoPort=34971, infoSecurePort=0, ipcPort=44203, storageInfo=lv=-57;cid=testClusterID;nsid=893969982;c=1689099437175), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:17,792 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1275c26878ffcf52: Processing first storage report for DS-00935e99-4978-41c7-a000-40e6e96bc645 from datanode 79ed39e4-9e95-4d5e-9b06-a0c1a820c28d 2023-07-11 18:17:17,792 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1275c26878ffcf52: from storage DS-00935e99-4978-41c7-a000-40e6e96bc645 node DatanodeRegistration(127.0.0.1:46655, datanodeUuid=79ed39e4-9e95-4d5e-9b06-a0c1a820c28d, infoPort=34971, infoSecurePort=0, ipcPort=44203, storageInfo=lv=-57;cid=testClusterID;nsid=893969982;c=1689099437175), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:17,810 INFO [Listener at localhost/44203] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40765 2023-07-11 18:17:17,819 WARN [Listener at localhost/45067] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:17:17,924 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7b20f11ac2839437: Processing first storage report for DS-363be66f-c727-42df-80da-16ce57120d9c from datanode 19532065-78a7-45e7-b1e1-ad3572ab15d7 2023-07-11 18:17:17,924 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7b20f11ac2839437: from storage DS-363be66f-c727-42df-80da-16ce57120d9c node DatanodeRegistration(127.0.0.1:46605, datanodeUuid=19532065-78a7-45e7-b1e1-ad3572ab15d7, infoPort=45609, infoSecurePort=0, ipcPort=45067, storageInfo=lv=-57;cid=testClusterID;nsid=893969982;c=1689099437175), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:17,924 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7b20f11ac2839437: Processing first storage 
report for DS-ed1d19b3-a5da-4566-8f36-6c5d4d34cc76 from datanode 19532065-78a7-45e7-b1e1-ad3572ab15d7 2023-07-11 18:17:17,924 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7b20f11ac2839437: from storage DS-ed1d19b3-a5da-4566-8f36-6c5d4d34cc76 node DatanodeRegistration(127.0.0.1:46605, datanodeUuid=19532065-78a7-45e7-b1e1-ad3572ab15d7, infoPort=45609, infoSecurePort=0, ipcPort=45067, storageInfo=lv=-57;cid=testClusterID;nsid=893969982;c=1689099437175), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:17,931 DEBUG [Listener at localhost/45067] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b 2023-07-11 18:17:17,934 INFO [Listener at localhost/45067] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/zookeeper_0, clientPort=51347, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-11 18:17:17,935 INFO [Listener at localhost/45067] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=51347 2023-07-11 18:17:17,936 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:17,937 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:17,958 INFO [Listener at localhost/45067] util.FSUtils(471): Created version file at hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c with version=8 2023-07-11 18:17:17,959 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/hbase-staging 2023-07-11 18:17:17,960 DEBUG [Listener at localhost/45067] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-11 18:17:17,960 DEBUG [Listener at localhost/45067] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-11 18:17:17,960 DEBUG [Listener at localhost/45067] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-11 18:17:17,960 DEBUG [Listener at localhost/45067] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:17,961 INFO [Listener at localhost/45067] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:17,962 INFO [Listener at localhost/45067] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40609 2023-07-11 18:17:17,963 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:17,964 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:17,965 INFO [Listener at localhost/45067] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40609 connecting to ZooKeeper ensemble=127.0.0.1:51347 2023-07-11 18:17:17,973 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:406090x0, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:17,973 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40609-0x101559a8a700000 connected 2023-07-11 18:17:17,998 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:17,998 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:17,999 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:18,002 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40609 2023-07-11 18:17:18,003 DEBUG [Listener at localhost/45067] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40609 2023-07-11 18:17:18,003 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40609 2023-07-11 18:17:18,004 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40609 2023-07-11 18:17:18,004 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40609 2023-07-11 18:17:18,006 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:18,006 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:18,007 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:18,007 INFO [Listener at localhost/45067] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-11 18:17:18,007 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:18,007 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:18,007 INFO [Listener at localhost/45067] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 18:17:18,008 INFO [Listener at localhost/45067] http.HttpServer(1146): Jetty bound to port 35359 2023-07-11 18:17:18,008 INFO [Listener at localhost/45067] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:18,015 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,016 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@625ee407{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:18,016 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,016 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@9246e58{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:18,132 INFO [Listener at localhost/45067] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:18,134 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:18,134 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:18,134 INFO [Listener at localhost/45067] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:17:18,135 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,136 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3b168a60{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/jetty-0_0_0_0-35359-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8612214434286778311/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 18:17:18,137 INFO [Listener at localhost/45067] server.AbstractConnector(333): Started ServerConnector@37863041{HTTP/1.1, (http/1.1)}{0.0.0.0:35359} 2023-07-11 18:17:18,138 INFO [Listener at localhost/45067] server.Server(415): Started @38487ms 2023-07-11 18:17:18,138 INFO [Listener at localhost/45067] master.HMaster(444): hbase.rootdir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c, hbase.cluster.distributed=false 2023-07-11 18:17:18,153 INFO [Listener at localhost/45067] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:18,154 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,154 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,154 INFO 
[Listener at localhost/45067] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:18,154 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,154 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:18,154 INFO [Listener at localhost/45067] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:18,155 INFO [Listener at localhost/45067] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45273 2023-07-11 18:17:18,155 INFO [Listener at localhost/45067] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:18,157 DEBUG [Listener at localhost/45067] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:18,157 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,158 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,159 INFO [Listener at localhost/45067] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45273 connecting to ZooKeeper ensemble=127.0.0.1:51347 2023-07-11 18:17:18,165 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:452730x0, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:18,166 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45273-0x101559a8a700001 connected 2023-07-11 18:17:18,166 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:18,167 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:18,167 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:18,168 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45273 2023-07-11 18:17:18,168 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45273 2023-07-11 18:17:18,169 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45273 2023-07-11 18:17:18,172 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45273 2023-07-11 18:17:18,174 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45273 2023-07-11 18:17:18,176 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:18,176 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:18,177 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:18,177 INFO [Listener at localhost/45067] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:18,177 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:18,177 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:18,178 INFO [Listener at localhost/45067] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:17:18,179 INFO [Listener at localhost/45067] http.HttpServer(1146): Jetty bound to port 43901 2023-07-11 18:17:18,179 INFO [Listener at localhost/45067] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:18,180 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,180 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@204cfa25{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:18,180 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,181 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14ee43e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:18,301 INFO [Listener at localhost/45067] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:18,302 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:18,302 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:18,302 INFO [Listener at localhost/45067] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:17:18,303 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,304 INFO 
[Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@73f692b6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/jetty-0_0_0_0-43901-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1392047335293333432/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:18,306 INFO [Listener at localhost/45067] server.AbstractConnector(333): Started ServerConnector@5230b41e{HTTP/1.1, (http/1.1)}{0.0.0.0:43901} 2023-07-11 18:17:18,306 INFO [Listener at localhost/45067] server.Server(415): Started @38656ms 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:18,321 INFO [Listener at localhost/45067] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:18,322 INFO [Listener at localhost/45067] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39037 2023-07-11 18:17:18,323 INFO [Listener at localhost/45067] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:18,324 DEBUG [Listener at localhost/45067] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:18,324 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,326 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,327 INFO [Listener at localhost/45067] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39037 connecting to ZooKeeper ensemble=127.0.0.1:51347 2023-07-11 18:17:18,332 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:390370x0, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 
18:17:18,333 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39037-0x101559a8a700002 connected 2023-07-11 18:17:18,333 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:18,333 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:18,334 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:18,337 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39037 2023-07-11 18:17:18,337 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39037 2023-07-11 18:17:18,338 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39037 2023-07-11 18:17:18,339 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39037 2023-07-11 18:17:18,339 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39037 2023-07-11 18:17:18,341 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:18,341 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:18,342 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:18,342 INFO [Listener at localhost/45067] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:18,343 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:18,343 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:18,343 INFO [Listener at localhost/45067] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 18:17:18,344 INFO [Listener at localhost/45067] http.HttpServer(1146): Jetty bound to port 33957 2023-07-11 18:17:18,344 INFO [Listener at localhost/45067] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:18,348 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,348 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@321cd692{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:18,349 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,349 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@398f58e0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:18,481 INFO [Listener at localhost/45067] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:18,482 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:18,482 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:18,483 INFO [Listener at localhost/45067] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:17:18,483 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,484 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@31ec606e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/jetty-0_0_0_0-33957-hbase-server-2_4_18-SNAPSHOT_jar-_-any-246657543628518228/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:18,486 INFO [Listener at localhost/45067] server.AbstractConnector(333): Started ServerConnector@f5d6833{HTTP/1.1, (http/1.1)}{0.0.0.0:33957} 2023-07-11 18:17:18,486 INFO [Listener at localhost/45067] server.Server(415): Started @38836ms 2023-07-11 18:17:18,498 INFO [Listener at localhost/45067] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:18,498 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,498 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,498 INFO [Listener at localhost/45067] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:18,498 INFO 
[Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:18,499 INFO [Listener at localhost/45067] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:18,499 INFO [Listener at localhost/45067] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:18,499 INFO [Listener at localhost/45067] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42269 2023-07-11 18:17:18,500 INFO [Listener at localhost/45067] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:18,502 DEBUG [Listener at localhost/45067] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:18,502 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,504 INFO [Listener at localhost/45067] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,506 INFO [Listener at localhost/45067] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42269 connecting to ZooKeeper ensemble=127.0.0.1:51347 2023-07-11 18:17:18,510 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:422690x0, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:18,511 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:422690x0, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:18,512 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42269-0x101559a8a700003 connected 2023-07-11 18:17:18,512 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:18,513 DEBUG [Listener at localhost/45067] zookeeper.ZKUtil(164): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:18,513 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42269 2023-07-11 18:17:18,513 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42269 2023-07-11 18:17:18,514 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42269 2023-07-11 18:17:18,515 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42269 2023-07-11 18:17:18,517 DEBUG [Listener at localhost/45067] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=42269 2023-07-11 18:17:18,519 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:18,519 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:18,519 INFO [Listener at localhost/45067] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:18,520 INFO [Listener at localhost/45067] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:18,520 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:18,520 INFO [Listener at localhost/45067] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:18,520 INFO [Listener at localhost/45067] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:17:18,521 INFO [Listener at localhost/45067] http.HttpServer(1146): Jetty bound to port 37123 2023-07-11 18:17:18,521 INFO [Listener at localhost/45067] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:18,527 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,527 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f728d8a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:18,528 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,528 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@65d00383{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:18,649 INFO [Listener at localhost/45067] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:18,650 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:18,650 INFO [Listener at localhost/45067] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:18,651 INFO [Listener at localhost/45067] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:17:18,653 INFO [Listener at localhost/45067] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:18,654 INFO [Listener at localhost/45067] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@6a1b9d4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/java.io.tmpdir/jetty-0_0_0_0-37123-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5730904064681076809/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:18,656 INFO [Listener at localhost/45067] server.AbstractConnector(333): Started ServerConnector@5c04004{HTTP/1.1, (http/1.1)}{0.0.0.0:37123} 2023-07-11 18:17:18,656 INFO [Listener at localhost/45067] server.Server(415): Started @39006ms 2023-07-11 18:17:18,659 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:18,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@57910c14{HTTP/1.1, (http/1.1)}{0.0.0.0:35883} 2023-07-11 18:17:18,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @39014ms 2023-07-11 18:17:18,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:18,665 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 18:17:18,666 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:18,667 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:18,667 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:18,668 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:18,669 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:18,667 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:18,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:17:18,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40609,1689099437960 from backup master directory 2023-07-11 18:17:18,671 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:17:18,672 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:18,672 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 18:17:18,672 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:18,672 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:18,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/hbase.id with ID: 572a48e4-c49b-485b-b798-22f0144a56d9 2023-07-11 18:17:18,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:18,712 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:18,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x0d1faf66 to 127.0.0.1:51347 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:18,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@202030e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:18,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:18,761 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-11 18:17:18,762 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:18,765 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store-tmp 2023-07-11 18:17:19,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:19,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 18:17:19,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:19,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:19,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 18:17:19,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:19,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 18:17:19,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:19,203 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/WALs/jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:19,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40609%2C1689099437960, suffix=, logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/WALs/jenkins-hbase4.apache.org,40609,1689099437960, archiveDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/oldWALs, maxLogs=10 2023-07-11 18:17:19,222 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK] 2023-07-11 18:17:19,223 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK] 2023-07-11 18:17:19,223 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK] 2023-07-11 18:17:19,226 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/WALs/jenkins-hbase4.apache.org,40609,1689099437960/jenkins-hbase4.apache.org%2C40609%2C1689099437960.1689099439207 2023-07-11 18:17:19,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK], DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK], DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK]] 2023-07-11 18:17:19,226 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:19,227 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:19,227 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:19,227 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:19,229 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:19,230 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-11 18:17:19,231 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-11 18:17:19,232 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:19,232 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:19,233 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:19,236 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:19,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:19,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9640444000, jitterRate=-0.10216368734836578}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:19,239 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:19,239 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-11 18:17:19,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-11 18:17:19,241 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-11 18:17:19,241 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-11 18:17:19,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-11 18:17:19,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-11 18:17:19,242 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-11 18:17:19,243 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-11 18:17:19,244 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-11 18:17:19,245 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-11 18:17:19,245 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-11 18:17:19,246 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-11 18:17:19,248 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:19,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-11 18:17:19,249 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-11 18:17:19,250 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-11 18:17:19,252 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:19,252 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:19,252 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-11 18:17:19,252 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:19,252 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:19,253 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40609,1689099437960, sessionid=0x101559a8a700000, setting cluster-up flag (Was=false) 2023-07-11 18:17:19,265 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:19,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-11 18:17:19,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:19,281 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:19,292 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-11 18:17:19,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:19,294 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.hbase-snapshot/.tmp 2023-07-11 18:17:19,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-11 18:17:19,296 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-11 18:17:19,297 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-11 18:17:19,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-11 18:17:19,298 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
2023-07-11 18:17:19,298 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 18:17:19,301 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-11 18:17:19,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 18:17:19,314 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 18:17:19,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 18:17:19,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:19,315 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689099469327 2023-07-11 18:17:19,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-11 18:17:19,327 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-11 18:17:19,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-11 18:17:19,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-11 18:17:19,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-11 18:17:19,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-11 18:17:19,328 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-11 18:17:19,328 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 18:17:19,328 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-11 18:17:19,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-11 18:17:19,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-11 18:17:19,329 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-11 18:17:19,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-11 18:17:19,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-11 18:17:19,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099439330,5,FailOnTimeoutGroup] 2023-07-11 18:17:19,330 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099439330,5,FailOnTimeoutGroup] 2023-07-11 18:17:19,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-11 18:17:19,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,330 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:19,330 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
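InitMetaProcedure writes the hbase:meta descriptor shown above (in-memory families, VERSIONS => '3', BLOOMFILTER => 'NONE', BLOCKSIZE => '8192' for 'info'). hbase:meta itself is created internally by the procedure, but the same attributes map onto the public HBase 2.x descriptor builders; a small sketch with a made-up table name:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: a user table whose 'info' family uses the same attributes the log
// prints for hbase:meta's 'info' family. "example:demo" is a hypothetical name.
public class MetaLikeDescriptorSketch {
  public static TableDescriptor build() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("example", "demo"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setInMemory(true)                  // IN_MEMORY => 'true'
            .setMaxVersions(3)                  // VERSIONS => '3'
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .setBlocksize(8192)                 // BLOCKSIZE => '8192'
            .build())
        .build();
  }
}
```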
2023-07-11 18:17:19,359 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(951): ClusterId : 572a48e4-c49b-485b-b798-22f0144a56d9 2023-07-11 18:17:19,369 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:19,370 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:19,371 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(951): ClusterId : 572a48e4-c49b-485b-b798-22f0144a56d9 2023-07-11 18:17:19,376 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:19,376 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(951): ClusterId : 572a48e4-c49b-485b-b798-22f0144a56d9 2023-07-11 18:17:19,376 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:19,376 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:19,376 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c 2023-07-11 18:17:19,379 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:19,379 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:19,382 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:19,382 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:19,382 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:19,382 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:19,382 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(45): Procedure 
online-snapshot initialized 2023-07-11 18:17:19,385 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ReadOnlyZKClient(139): Connect 0x702a8227 to 127.0.0.1:51347 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:19,388 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:17:19,389 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:17:19,395 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ReadOnlyZKClient(139): Connect 0x287b0d17 to 127.0.0.1:51347 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:19,395 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ReadOnlyZKClient(139): Connect 0x49cab2c7 to 127.0.0.1:51347 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:19,411 DEBUG [RS:0;jenkins-hbase4:45273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7eed7252, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:19,411 DEBUG [RS:1;jenkins-hbase4:39037] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26bce668, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:19,411 DEBUG [RS:0;jenkins-hbase4:45273] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3fd07869, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:19,412 DEBUG [RS:2;jenkins-hbase4:42269] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4332d9b7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:19,412 DEBUG [RS:1;jenkins-hbase4:39037] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6781ec53, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:19,412 DEBUG [RS:2;jenkins-hbase4:42269] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6305fb5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:19,422 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45273 2023-07-11 18:17:19,422 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39037 2023-07-11 18:17:19,422 INFO [RS:0;jenkins-hbase4:45273] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:19,422 INFO 
[RS:0;jenkins-hbase4:45273] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:19,422 INFO [RS:1;jenkins-hbase4:39037] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:19,422 INFO [RS:1;jenkins-hbase4:39037] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:19,422 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:17:19,422 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:17:19,423 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40609,1689099437960 with isa=jenkins-hbase4.apache.org/172.31.14.131:45273, startcode=1689099438153 2023-07-11 18:17:19,423 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40609,1689099437960 with isa=jenkins-hbase4.apache.org/172.31.14.131:39037, startcode=1689099438320 2023-07-11 18:17:19,423 DEBUG [RS:0;jenkins-hbase4:45273] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:19,423 DEBUG [RS:1;jenkins-hbase4:39037] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:19,425 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:42269 2023-07-11 18:17:19,425 INFO [RS:2;jenkins-hbase4:42269] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:19,425 INFO [RS:2;jenkins-hbase4:42269] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:19,425 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35203, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:19,425 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50169, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:19,425 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:17:19,426 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40609] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,426 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 18:17:19,427 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-11 18:17:19,427 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40609] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,427 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40609,1689099437960 with isa=jenkins-hbase4.apache.org/172.31.14.131:42269, startcode=1689099438497 2023-07-11 18:17:19,427 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 18:17:19,427 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c 2023-07-11 18:17:19,427 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-11 18:17:19,427 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c 2023-07-11 18:17:19,427 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33083 2023-07-11 18:17:19,427 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35359 2023-07-11 18:17:19,427 DEBUG [RS:2;jenkins-hbase4:42269] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:19,427 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33083 2023-07-11 18:17:19,428 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35359 2023-07-11 18:17:19,429 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39897, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:19,429 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40609] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,429 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-11 18:17:19,429 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-11 18:17:19,430 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:19,430 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c 2023-07-11 18:17:19,430 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:33083 2023-07-11 18:17:19,430 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=35359 2023-07-11 18:17:19,433 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ZKUtil(162): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,433 WARN [RS:0;jenkins-hbase4:45273] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:19,434 INFO [RS:0;jenkins-hbase4:45273] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:19,434 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,434 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ZKUtil(162): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,434 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39037,1689099438320] 2023-07-11 18:17:19,434 WARN [RS:1;jenkins-hbase4:39037] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:19,435 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,434 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42269,1689099438497] 2023-07-11 18:17:19,435 WARN [RS:2;jenkins-hbase4:42269] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 18:17:19,435 INFO [RS:1;jenkins-hbase4:39037] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:19,436 INFO [RS:2;jenkins-hbase4:42269] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:19,436 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,435 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45273,1689099438153] 2023-07-11 18:17:19,436 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1948): logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,443 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ZKUtil(162): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,444 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ZKUtil(162): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,444 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ZKUtil(162): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,444 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,444 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ZKUtil(162): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,444 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ZKUtil(162): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,444 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,445 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ZKUtil(162): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,445 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ZKUtil(162): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,445 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:19,445 INFO [RS:0;jenkins-hbase4:45273] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:19,445 DEBUG [RS:1;jenkins-hbase4:39037] 
regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:19,446 DEBUG [RS:2;jenkins-hbase4:42269] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:19,447 INFO [RS:2;jenkins-hbase4:42269] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:19,447 INFO [RS:1;jenkins-hbase4:39037] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:19,447 INFO [RS:0;jenkins-hbase4:45273] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:19,447 INFO [RS:0;jenkins-hbase4:45273] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:19,447 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,450 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:19,451 INFO [RS:1;jenkins-hbase4:39037] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:19,452 INFO [RS:2;jenkins-hbase4:42269] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:19,455 INFO [RS:1;jenkins-hbase4:39037] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:19,455 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,455 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,455 INFO [RS:2;jenkins-hbase4:42269] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:19,455 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,455 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
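Each region server above computes its global memstore limit (782.4 M with a low-water mark of 743.3 M, i.e. 95% of the limit) and a pressure-aware compaction throughput window of 50-100 MB/s. A hedged sketch of the configuration keys behind those numbers; the names are the usual HBase 2.x ones and should be checked against the running version:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: knobs behind the MemStoreFlusher and compaction-throughput lines above.
// The memstore limit is a fraction of the RS heap; the low mark is a fraction of
// that limit (0.95 matches 743.3 M / 782.4 M in the log).
public final class MemstoreAndCompactionSketch {
  public static void apply(Configuration conf) {
    conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
    conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
    // Pressure-aware compaction throughput bounds (bytes per second).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
  }
}
```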
2023-07-11 18:17:19,455 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,456 DEBUG [RS:0;jenkins-hbase4:45273] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,458 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:19,460 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,460 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,460 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,460 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,460 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,460 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 18:17:19,460 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,460 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:2;jenkins-hbase4:42269] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,461 DEBUG [RS:1;jenkins-hbase4:39037] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:19,468 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,468 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,469 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,469 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,469 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,469 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,469 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,469 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,473 INFO [RS:0;jenkins-hbase4:45273] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:19,473 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45273,1689099438153-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,481 INFO [RS:2;jenkins-hbase4:42269] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:19,481 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42269,1689099438497-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,482 INFO [RS:1;jenkins-hbase4:39037] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:19,482 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39037,1689099438320-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 18:17:19,490 INFO [RS:0;jenkins-hbase4:45273] regionserver.Replication(203): jenkins-hbase4.apache.org,45273,1689099438153 started 2023-07-11 18:17:19,491 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45273,1689099438153, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45273, sessionid=0x101559a8a700001 2023-07-11 18:17:19,493 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:19,493 DEBUG [RS:0;jenkins-hbase4:45273] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,493 DEBUG [RS:0;jenkins-hbase4:45273] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45273,1689099438153' 2023-07-11 18:17:19,493 DEBUG [RS:0;jenkins-hbase4:45273] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:19,493 DEBUG [RS:0;jenkins-hbase4:45273] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:19,494 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:19,494 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:19,494 DEBUG [RS:0;jenkins-hbase4:45273] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:19,494 DEBUG [RS:0;jenkins-hbase4:45273] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45273,1689099438153' 2023-07-11 18:17:19,494 DEBUG [RS:0;jenkins-hbase4:45273] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:19,494 INFO [RS:2;jenkins-hbase4:42269] regionserver.Replication(203): jenkins-hbase4.apache.org,42269,1689099438497 started 2023-07-11 18:17:19,494 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42269,1689099438497, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42269, sessionid=0x101559a8a700003 2023-07-11 18:17:19,494 DEBUG [RS:0;jenkins-hbase4:45273] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:19,494 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:19,495 DEBUG [RS:2;jenkins-hbase4:42269] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,495 DEBUG [RS:2;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42269,1689099438497' 2023-07-11 18:17:19,495 DEBUG [RS:2;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:19,495 DEBUG [RS:0;jenkins-hbase4:45273] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:19,495 INFO [RS:0;jenkins-hbase4:45273] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-11 18:17:19,495 DEBUG 
[RS:2;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:19,496 INFO [RS:1;jenkins-hbase4:39037] regionserver.Replication(203): jenkins-hbase4.apache.org,39037,1689099438320 started 2023-07-11 18:17:19,496 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39037,1689099438320, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39037, sessionid=0x101559a8a700002 2023-07-11 18:17:19,496 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:19,496 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:19,496 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:19,496 DEBUG [RS:1;jenkins-hbase4:39037] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,496 DEBUG [RS:1;jenkins-hbase4:39037] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39037,1689099438320' 2023-07-11 18:17:19,496 DEBUG [RS:2;jenkins-hbase4:42269] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:19,496 DEBUG [RS:2;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42269,1689099438497' 2023-07-11 18:17:19,497 DEBUG [RS:2;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:19,496 DEBUG [RS:1;jenkins-hbase4:39037] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:19,497 DEBUG [RS:1;jenkins-hbase4:39037] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:19,497 DEBUG [RS:2;jenkins-hbase4:42269] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:19,498 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,504 DEBUG [RS:2;jenkins-hbase4:42269] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:19,504 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:19,504 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ZKUtil(398): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-11 18:17:19,504 INFO [RS:2;jenkins-hbase4:42269] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-11 18:17:19,505 INFO [RS:0;jenkins-hbase4:45273] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-11 18:17:19,505 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-11 18:17:19,504 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:19,505 DEBUG [RS:1;jenkins-hbase4:39037] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:19,505 DEBUG [RS:1;jenkins-hbase4:39037] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39037,1689099438320' 2023-07-11 18:17:19,505 DEBUG [RS:1;jenkins-hbase4:39037] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:19,505 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ZKUtil(398): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-11 18:17:19,505 INFO [RS:2;jenkins-hbase4:42269] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-11 18:17:19,505 DEBUG [RS:1;jenkins-hbase4:39037] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:19,505 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,505 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,505 DEBUG [RS:1;jenkins-hbase4:39037] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:19,506 INFO [RS:2;jenkins-hbase4:42269] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,506 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,506 INFO [RS:1;jenkins-hbase4:39037] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-11 18:17:19,506 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,506 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ZKUtil(398): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-11 18:17:19,506 INFO [RS:1;jenkins-hbase4:39037] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-11 18:17:19,506 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:19,506 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
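The quota machinery starting above (QuotaRefresherChore, SpaceQuotaRefresherChore, "rpc throttle enabled is true") only runs because quota support is switched on for this cluster. As an assumption-labelled sketch, the usual switch is the hbase.quota.enabled property:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: turn on quota support so the RegionServerRpcQuotaManager and the
// space-quota chores seen in the log are started. hbase.quota.enabled is the
// documented switch and defaults to false.
public final class QuotaSwitchSketch {
  public static void enableQuotas(Configuration conf) {
    conf.setBoolean("hbase.quota.enabled", true);
  }
}
```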
2023-07-11 18:17:19,611 INFO [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45273%2C1689099438153, suffix=, logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,45273,1689099438153, archiveDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs, maxLogs=32 2023-07-11 18:17:19,611 INFO [RS:2;jenkins-hbase4:42269] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42269%2C1689099438497, suffix=, logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,42269,1689099438497, archiveDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs, maxLogs=32 2023-07-11 18:17:19,611 INFO [RS:1;jenkins-hbase4:39037] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39037%2C1689099438320, suffix=, logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,39037,1689099438320, archiveDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs, maxLogs=32 2023-07-11 18:17:19,637 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK] 2023-07-11 18:17:19,640 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK] 2023-07-11 18:17:19,640 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK] 2023-07-11 18:17:19,641 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK] 2023-07-11 18:17:19,641 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK] 2023-07-11 18:17:19,641 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK] 2023-07-11 18:17:19,649 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK] 2023-07-11 18:17:19,649 DEBUG [RS-EventLoopGroup-11-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK] 2023-07-11 18:17:19,649 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK] 2023-07-11 18:17:19,652 INFO [RS:2;jenkins-hbase4:42269] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,42269,1689099438497/jenkins-hbase4.apache.org%2C42269%2C1689099438497.1689099439617 2023-07-11 18:17:19,652 INFO [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,45273,1689099438153/jenkins-hbase4.apache.org%2C45273%2C1689099438153.1689099439613 2023-07-11 18:17:19,656 DEBUG [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK], DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK], DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK]] 2023-07-11 18:17:19,656 DEBUG [RS:2;jenkins-hbase4:42269] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK], DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK], DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK]] 2023-07-11 18:17:19,656 INFO [RS:1;jenkins-hbase4:39037] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,39037,1689099438320/jenkins-hbase4.apache.org%2C39037%2C1689099438320.1689099439618 2023-07-11 18:17:19,658 DEBUG [RS:1;jenkins-hbase4:39037] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK], DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK], DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK]] 2023-07-11 18:17:19,818 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:19,819 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:17:19,821 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/info 2023-07-11 18:17:19,821 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 
5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:17:19,822 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:19,822 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:17:19,823 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:17:19,824 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:17:19,824 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:19,825 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:17:19,826 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/table 2023-07-11 18:17:19,826 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, 
region 1588230740 columnFamilyName table 2023-07-11 18:17:19,827 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:19,827 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740 2023-07-11 18:17:19,828 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740 2023-07-11 18:17:19,830 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 18:17:19,832 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:17:19,834 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:19,834 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11194577440, jitterRate=0.04257626831531525}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:17:19,834 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:17:19,834 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:17:19,834 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:17:19,834 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:17:19,834 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:17:19,835 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:17:19,835 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:19,835 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:17:19,836 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 18:17:19,836 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-11 18:17:19,836 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-11 18:17:19,838 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-11 18:17:19,839 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-11 18:17:19,989 DEBUG [jenkins-hbase4:40609] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-11 18:17:19,990 DEBUG [jenkins-hbase4:40609] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:19,990 DEBUG [jenkins-hbase4:40609] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:19,990 DEBUG [jenkins-hbase4:40609] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:19,990 DEBUG [jenkins-hbase4:40609] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:19,990 DEBUG [jenkins-hbase4:40609] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:19,991 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45273,1689099438153, state=OPENING 2023-07-11 18:17:19,994 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-11 18:17:19,996 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:19,996 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45273,1689099438153}] 2023-07-11 18:17:19,996 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:17:20,149 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:20,149 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:17:20,151 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:17:20,157 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 18:17:20,158 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:20,160 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45273%2C1689099438153.meta, suffix=.meta, logDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,45273,1689099438153, archiveDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs, maxLogs=32 2023-07-11 18:17:20,235 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK] 2023-07-11 18:17:20,235 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK] 2023-07-11 18:17:20,235 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK] 2023-07-11 18:17:20,237 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/WALs/jenkins-hbase4.apache.org,45273,1689099438153/jenkins-hbase4.apache.org%2C45273%2C1689099438153.meta.1689099440160.meta 2023-07-11 18:17:20,237 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46605,DS-363be66f-c727-42df-80da-16ce57120d9c,DISK], DatanodeInfoWithStorage[127.0.0.1:46655,DS-2a2e16c4-b302-4bf8-8149-5567b0ab4f53,DISK], DatanodeInfoWithStorage[127.0.0.1:33959,DS-f9808bf6-6b71-4aae-a0f3-a17c8f8be178,DISK]] 2023-07-11 18:17:20,237 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:20,237 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:17:20,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 18:17:20,238 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
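The entries above record the meta WAL being created and hbase:meta being opened on one of the mini-cluster region servers, with its location published under /hbase/meta-region-server. Purely as an illustrative sketch (not part of the test output), this is roughly how a client could resolve the current hbase:meta location with the standard HBase 2.x API; the ZooKeeper quorum string below is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class MetaLocationSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder quorum; a real client would point at the cluster's ZooKeeper ensemble.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
          // Resolves the server currently hosting hbase:meta (published in ZooKeeper by the master).
          HRegionLocation meta = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
          System.out.println("hbase:meta is on " + meta.getServerName());
        }
      }
    }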
2023-07-11 18:17:20,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 18:17:20,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 18:17:20,238 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 18:17:20,242 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:17:20,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/info 2023-07-11 18:17:20,243 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/info 2023-07-11 18:17:20,244 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:17:20,244 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:20,244 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:17:20,245 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:17:20,245 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:17:20,246 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:17:20,246 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:20,246 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:17:20,247 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/table 2023-07-11 18:17:20,247 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/table 2023-07-11 18:17:20,247 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 18:17:20,248 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:20,248 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740 2023-07-11 18:17:20,250 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740 2023-07-11 18:17:20,252 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
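The CompactionConfiguration entries above repeat once per column family and simply echo the effective compaction settings. As a minimal sketch of where those numbers come from, assuming the stock HBase 2.x configuration keys (the values printed in the log are the shipped defaults):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionSettingsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // minFilesToCompact:3 / maxFilesToCompact:10 in the log
        int minFiles = conf.getInt("hbase.hstore.compaction.min", 3);
        int maxFiles = conf.getInt("hbase.hstore.compaction.max", 10);
        // ratio 1.200000 / off-peak ratio 5.000000
        float ratio = conf.getFloat("hbase.hstore.compaction.ratio", 1.2F);
        float offPeak = conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);
        // major period 604800000 ms (7 days), major jitter 0.5
        long majorPeriodMs = conf.getLong("hbase.hregion.majorcompaction", 604800000L);
        float majorJitter = conf.getFloat("hbase.hregion.majorcompaction.jitter", 0.5F);
        System.out.printf("files [%d,%d) ratio %.1f offPeak %.1f major %dms jitter %.1f%n",
            minFiles, maxFiles, ratio, offPeak, majorPeriodMs, majorJitter);
      }
    }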
2023-07-11 18:17:20,253 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:17:20,254 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11693003360, jitterRate=0.0889957994222641}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:17:20,254 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:17:20,255 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689099440149 2023-07-11 18:17:20,260 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 18:17:20,260 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 18:17:20,261 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45273,1689099438153, state=OPEN 2023-07-11 18:17:20,262 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:17:20,262 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:17:20,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-11 18:17:20,264 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45273,1689099438153 in 266 msec 2023-07-11 18:17:20,265 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-11 18:17:20,265 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 428 msec 2023-07-11 18:17:20,268 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 968 msec 2023-07-11 18:17:20,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689099440268, completionTime=-1 2023-07-11 18:17:20,268 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-11 18:17:20,268 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-11 18:17:20,271 DEBUG [hconnection-0xd4dd3b-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:20,273 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57302, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:20,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-11 18:17:20,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689099500274 2023-07-11 18:17:20,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689099560274 2023-07-11 18:17:20,274 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40609,1689099437960-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40609,1689099437960-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40609,1689099437960-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40609, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
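Around this point the master reports that all three region servers have checked in and the assignment manager has joined the cluster. As an illustrative sketch only (placeholder quorum, not the test's own code), a client can make the same "three live servers" observation through the 2.x Admin API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class LiveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1"); // placeholder
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Mirrors the master's "RegionServer count=3" check from the client side.
          int liveServers = admin.getClusterMetrics().getLiveServerMetrics().size();
          System.out.println("live region servers: " + liveServers);
        }
      }
    }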
2023-07-11 18:17:20,280 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:20,281 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-11 18:17:20,281 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-11 18:17:20,283 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:20,284 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:20,285 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,286 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88 empty. 2023-07-11 18:17:20,287 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,287 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-11 18:17:20,302 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:20,303 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => d3f4beb004c0ee8df7011e5e153eae88, NAME => 'hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp 2023-07-11 18:17:20,312 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,312 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing d3f4beb004c0ee8df7011e5e153eae88, disabling compactions & flushes 2023-07-11 18:17:20,312 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 
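The create 'hbase:namespace' entry above spells out a complete column-family definition. As a hedged illustration only, this is roughly how an equivalent descriptor would be assembled with the 2.x client API for an ordinary user table; the table name demo_ns_like is hypothetical, and hbase:namespace itself is created by the master rather than by clients:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceLikeTableSketch {
      // Builds a descriptor matching the family settings shown in the log entry.
      static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.ROW)   // BLOOMFILTER => 'ROW'
            .setInMemory(true)                   // IN_MEMORY => 'true'
            .setMaxVersions(10)                  // VERSIONS => '10'
            .setBlocksize(8192)                  // BLOCKSIZE => '8192'
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_ns_like")) // hypothetical name
            .setColumnFamily(info)
            .build();
      }

      static void create(Admin admin) throws java.io.IOException {
        admin.createTable(build());
      }
    }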
2023-07-11 18:17:20,312 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:20,312 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. after waiting 0 ms 2023-07-11 18:17:20,312 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:20,312 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:20,312 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for d3f4beb004c0ee8df7011e5e153eae88: 2023-07-11 18:17:20,314 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:20,315 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099440315"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099440315"}]},"ts":"1689099440315"} 2023-07-11 18:17:20,317 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:20,318 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:20,318 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099440318"}]},"ts":"1689099440318"} 2023-07-11 18:17:20,319 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-11 18:17:20,322 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:20,322 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:20,322 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:20,322 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:20,322 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:20,322 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d3f4beb004c0ee8df7011e5e153eae88, ASSIGN}] 2023-07-11 18:17:20,326 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=d3f4beb004c0ee8df7011e5e153eae88, ASSIGN 2023-07-11 18:17:20,327 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=d3f4beb004c0ee8df7011e5e153eae88, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39037,1689099438320; forceNewPlan=false, retain=false 2023-07-11 18:17:20,418 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:20,420 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-11 18:17:20,422 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:20,422 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:20,424 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,424 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7 empty. 
2023-07-11 18:17:20,425 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,425 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-11 18:17:20,437 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:20,439 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => bebe08844f5dda37e3ab23f14ae677f7, NAME => 'hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp 2023-07-11 18:17:20,448 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,448 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing bebe08844f5dda37e3ab23f14ae677f7, disabling compactions & flushes 2023-07-11 18:17:20,448 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:20,448 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:20,448 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. after waiting 0 ms 2023-07-11 18:17:20,448 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:20,448 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 
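The hbase:rsgroup creation above carries two table-level attributes: the MultiRowMutationEndpoint coprocessor and a DisabledRegionSplitPolicy. The following is only a sketch of how such attributes are attached with the 2.x TableDescriptorBuilder, to the best of my recollection of that builder API; the table name is hypothetical, and hbase:rsgroup itself is created internally by the rsgroup implementation:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CoprocessorTableSketch {
      // A descriptor with a single 'm' family, a coprocessor endpoint and a disabled
      // split policy, loosely mirroring what the log shows for hbase:rsgroup.
      static TableDescriptor build() throws java.io.IOException {
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_rsgroup_like")) // hypothetical name
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
            .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
            .setRegionSplitPolicyClassName(
                "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
            .build();
      }
    }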
2023-07-11 18:17:20,448 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for bebe08844f5dda37e3ab23f14ae677f7: 2023-07-11 18:17:20,451 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:20,452 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099440451"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099440451"}]},"ts":"1689099440451"} 2023-07-11 18:17:20,453 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:20,454 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:20,454 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099440454"}]},"ts":"1689099440454"} 2023-07-11 18:17:20,455 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-11 18:17:20,459 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:20,459 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:20,459 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:20,459 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:20,459 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:20,460 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bebe08844f5dda37e3ab23f14ae677f7, ASSIGN}] 2023-07-11 18:17:20,460 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=bebe08844f5dda37e3ab23f14ae677f7, ASSIGN 2023-07-11 18:17:20,461 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=bebe08844f5dda37e3ab23f14ae677f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39037,1689099438320; forceNewPlan=false, retain=false 2023-07-11 18:17:20,461 INFO [jenkins-hbase4:40609] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-11 18:17:20,463 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d3f4beb004c0ee8df7011e5e153eae88, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,463 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099440463"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099440463"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099440463"}]},"ts":"1689099440463"} 2023-07-11 18:17:20,464 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=bebe08844f5dda37e3ab23f14ae677f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,464 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099440464"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099440464"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099440464"}]},"ts":"1689099440464"} 2023-07-11 18:17:20,465 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure d3f4beb004c0ee8df7011e5e153eae88, server=jenkins-hbase4.apache.org,39037,1689099438320}] 2023-07-11 18:17:20,465 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure bebe08844f5dda37e3ab23f14ae677f7, server=jenkins-hbase4.apache.org,39037,1689099438320}] 2023-07-11 18:17:20,623 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,623 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:17:20,624 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:17:20,630 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 
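The assignment entries above send both the namespace and rsgroup regions to the region server on port 39037 via OpenRegionProcedure. As a small illustrative sketch (the server coordinates are placeholders, not taken from this run), the Admin API can confirm which regions a given server ended up hosting:

    import java.util.List;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class RegionsOnServerSketch {
      // Lists the regions currently deployed on one region server, e.g. to confirm that
      // the namespace and rsgroup regions landed where the assignment procedures sent them.
      static void dump(Admin admin, String host, int port, long startCode) throws java.io.IOException {
        ServerName server = ServerName.valueOf(host, port, startCode); // placeholder coordinates
        List<RegionInfo> regions = admin.getRegions(server);
        for (RegionInfo region : regions) {
          System.out.println(region.getRegionNameAsString());
        }
      }
    }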
2023-07-11 18:17:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => d3f4beb004c0ee8df7011e5e153eae88, NAME => 'hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,631 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,633 INFO [StoreOpener-d3f4beb004c0ee8df7011e5e153eae88-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,635 DEBUG [StoreOpener-d3f4beb004c0ee8df7011e5e153eae88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/info 2023-07-11 18:17:20,635 DEBUG [StoreOpener-d3f4beb004c0ee8df7011e5e153eae88-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/info 2023-07-11 18:17:20,635 INFO [StoreOpener-d3f4beb004c0ee8df7011e5e153eae88-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region d3f4beb004c0ee8df7011e5e153eae88 columnFamilyName info 2023-07-11 18:17:20,636 INFO [StoreOpener-d3f4beb004c0ee8df7011e5e153eae88-1] regionserver.HStore(310): Store=d3f4beb004c0ee8df7011e5e153eae88/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:20,636 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,637 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,639 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:20,642 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:20,642 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened d3f4beb004c0ee8df7011e5e153eae88; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9825763200, jitterRate=-0.0849044919013977}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:20,643 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for d3f4beb004c0ee8df7011e5e153eae88: 2023-07-11 18:17:20,643 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88., pid=8, masterSystemTime=1689099440623 2023-07-11 18:17:20,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:20,647 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:20,647 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 
2023-07-11 18:17:20,647 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bebe08844f5dda37e3ab23f14ae677f7, NAME => 'hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:20,648 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=d3f4beb004c0ee8df7011e5e153eae88, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:17:20,648 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099440647"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099440647"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099440647"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099440647"}]},"ts":"1689099440647"} 2023-07-11 18:17:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. service=MultiRowMutationService 2023-07-11 18:17:20,648 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-11 18:17:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,648 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,650 INFO [StoreOpener-bebe08844f5dda37e3ab23f14ae677f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,651 DEBUG [StoreOpener-bebe08844f5dda37e3ab23f14ae677f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/m 2023-07-11 18:17:20,651 DEBUG [StoreOpener-bebe08844f5dda37e3ab23f14ae677f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/m 2023-07-11 18:17:20,652 INFO [StoreOpener-bebe08844f5dda37e3ab23f14ae677f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bebe08844f5dda37e3ab23f14ae677f7 columnFamilyName m 2023-07-11 18:17:20,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-11 18:17:20,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure d3f4beb004c0ee8df7011e5e153eae88, server=jenkins-hbase4.apache.org,39037,1689099438320 in 184 msec 2023-07-11 18:17:20,652 INFO [StoreOpener-bebe08844f5dda37e3ab23f14ae677f7-1] regionserver.HStore(310): Store=bebe08844f5dda37e3ab23f14ae677f7/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:20,653 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,654 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-11 18:17:20,654 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=d3f4beb004c0ee8df7011e5e153eae88, ASSIGN in 330 msec 2023-07-11 18:17:20,655 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:20,655 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099440655"}]},"ts":"1689099440655"} 2023-07-11 18:17:20,657 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-11 18:17:20,657 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:20,659 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:20,660 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:20,660 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bebe08844f5dda37e3ab23f14ae677f7; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4d9e281, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:20,660 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bebe08844f5dda37e3ab23f14ae677f7: 2023-07-11 18:17:20,661 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7., pid=9, masterSystemTime=1689099440623 2023-07-11 18:17:20,662 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 380 msec 2023-07-11 18:17:20,662 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:20,662 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:20,663 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=bebe08844f5dda37e3ab23f14ae677f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,663 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099440662"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099440662"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099440662"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099440662"}]},"ts":"1689099440662"} 2023-07-11 18:17:20,665 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-11 18:17:20,666 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure bebe08844f5dda37e3ab23f14ae677f7, server=jenkins-hbase4.apache.org,39037,1689099438320 in 199 msec 2023-07-11 18:17:20,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-11 18:17:20,667 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=bebe08844f5dda37e3ab23f14ae677f7, ASSIGN in 206 msec 2023-07-11 18:17:20,668 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:20,668 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099440668"}]},"ts":"1689099440668"} 2023-07-11 18:17:20,670 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-11 18:17:20,672 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:20,673 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 254 msec 2023-07-11 18:17:20,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-11 18:17:20,684 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:20,684 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:20,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:20,688 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41232, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:20,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-11 18:17:20,699 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:20,702 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-11 18:17:20,703 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 18:17:20,710 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:20,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-11 18:17:20,718 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-11 18:17:20,720 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-11 18:17:20,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 2.048sec 2023-07-11 18:17:20,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 2023-07-11 18:17:20,721 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:20,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-11 18:17:20,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-11 18:17:20,724 DEBUG [Listener at localhost/45067] zookeeper.ReadOnlyZKClient(139): Connect 0x6b7167b7 to 127.0.0.1:51347 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:20,725 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:20,726 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-11 18:17:20,726 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-11 18:17:20,726 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:20,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-11 18:17:20,728 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/quota/94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:20,729 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/quota/94077be50b5646d492a0b264c0b8a769 empty. 
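The create request logged above spells out the full descriptor for hbase:quota (families 'q' and 'u', VERSIONS => '1', BLOOMFILTER => 'ROW', BLOCKSIZE => '65536', TTL => 'FOREVER'). hbase:quota itself is created by the master, but the same attributes map onto the public client builders. A minimal sketch assuming the HBase 2.x Java client; the table name demo:quota_like is a hypothetical stand-in, not part of the log above:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class QuotaLikeDescriptorSketch {
  // Rebuilds the column-family attributes spelled out in the create request above.
  static TableDescriptor quotaLikeDescriptor() {
    ColumnFamilyDescriptor q = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q"))
        .setMaxVersions(1)                    // VERSIONS => '1'
        .setBloomFilterType(BloomType.ROW)    // BLOOMFILTER => 'ROW'
        .setBlocksize(65536)                  // BLOCKSIZE => '65536'
        .setTimeToLive(HConstants.FOREVER)    // TTL => 'FOREVER'
        .build();
    ColumnFamilyDescriptor u = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("u"))
        .setMaxVersions(1)
        .setBloomFilterType(BloomType.ROW)
        .build();
    // 'demo:quota_like' is a hypothetical name; hbase:quota itself is system-managed.
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo", "quota_like"))
        .setColumnFamily(q)
        .setColumnFamily(u)
        .build();
  }
}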
2023-07-11 18:17:20,730 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/quota/94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:20,730 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-11 18:17:20,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-11 18:17:20,733 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-11 18:17:20,734 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:20,734 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:20,735 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:17:20,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:20,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-11 18:17:20,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-11 18:17:20,736 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40609,1689099437960-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-11 18:17:20,737 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40609,1689099437960-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-11 18:17:20,737 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40609,1689099437960] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-11 18:17:20,745 WARN [IPC Server handler 0 on default port 33083] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-11 18:17:20,745 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-11 18:17:20,745 WARN [IPC Server handler 0 on default port 33083] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-11 18:17:20,745 WARN [IPC Server handler 0 on default port 33083] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-11 18:17:20,746 DEBUG [Listener at localhost/45067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c97e0aa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:20,749 DEBUG [hconnection-0x5f6f38a7-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:20,751 INFO [RS-EventLoopGroup-9-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35030, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:20,753 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:20,753 INFO [Listener at localhost/45067] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:20,756 DEBUG [Listener at localhost/45067] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-11 18:17:20,756 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:20,757 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 94077be50b5646d492a0b264c0b8a769, NAME => 'hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp 2023-07-11 18:17:20,757 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39786, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-11 18:17:20,764 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-11 18:17:20,764 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:20,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-11 18:17:20,765 DEBUG [Listener at localhost/45067] zookeeper.ReadOnlyZKClient(139): Connect 0x1796f8f0 to 127.0.0.1:51347 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:20,774 DEBUG [Listener at localhost/45067] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@311a50a5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:20,774 INFO [Listener at localhost/45067] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51347 2023-07-11 18:17:20,777 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,777 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 94077be50b5646d492a0b264c0b8a769, disabling compactions & flushes 2023-07-11 18:17:20,777 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:20,777 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:20,777 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. after waiting 0 ms 2023-07-11 18:17:20,777 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 
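The "set balanceSwitch=false" request handled above is the test client switching the load balancer off before the test starts manipulating assignments. A minimal sketch of the equivalent client call, assuming the HBase 2.x Admin API:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BalancerSwitchSketch {
  // Turns the load balancer off synchronously, as the request above does.
  static void disableBalancer() throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      boolean wasOn = admin.balancerSwitch(false, true); // returns the previous state
    }
  }
}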
2023-07-11 18:17:20,777 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:20,777 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 94077be50b5646d492a0b264c0b8a769: 2023-07-11 18:17:20,780 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:20,781 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:20,782 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101559a8a70000a connected 2023-07-11 18:17:20,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-11 18:17:20,783 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689099440782"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099440782"}]},"ts":"1689099440782"} 2023-07-11 18:17:20,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-11 18:17:20,785 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
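The request logged above creates the 'np1' namespace with region and table quotas attached (hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'). A minimal sketch of the client side, assuming the standard HBase Admin API:

import java.io.IOException;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;

public class NamespaceQuotaSketch {
  // Creates a namespace capped at 5 regions and 2 tables, matching the request above.
  static void createNp1(Admin admin) throws IOException {
    NamespaceDescriptor np1 = NamespaceDescriptor.create("np1")
        .addConfiguration("hbase.namespace.quota.maxregions", "5")
        .addConfiguration("hbase.namespace.quota.maxtables", "2")
        .build();
    admin.createNamespace(np1);
  }
}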
2023-07-11 18:17:20,786 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:20,787 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099440787"}]},"ts":"1689099440787"} 2023-07-11 18:17:20,788 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-11 18:17:20,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-11 18:17:20,794 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:20,797 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:20,797 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:20,797 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:20,797 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:20,797 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:20,798 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=94077be50b5646d492a0b264c0b8a769, ASSIGN}] 2023-07-11 18:17:20,799 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=94077be50b5646d492a0b264c0b8a769, ASSIGN 2023-07-11 18:17:20,799 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=14, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=94077be50b5646d492a0b264c0b8a769, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39037,1689099438320; forceNewPlan=false, retain=false 2023-07-11 18:17:20,800 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 16 msec 2023-07-11 18:17:20,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=13 2023-07-11 18:17:20,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:20,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-11 
18:17:20,900 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:20,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-11 18:17:20,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-11 18:17:20,903 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:20,903 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:17:20,906 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:20,907 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:20,908 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 empty. 2023-07-11 18:17:20,908 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:20,908 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-11 18:17:20,922 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:20,926 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 06da5d40e374ae32c53aaef8d43cd468, NAME => 'np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp 2023-07-11 18:17:20,935 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:20,935 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 06da5d40e374ae32c53aaef8d43cd468, disabling compactions & flushes 2023-07-11 18:17:20,935 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 
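The create of np1:table1 with a single 'fam1' family, driven above by CreateTableProcedure pid=15, corresponds to a straightforward Admin call. A minimal sketch, assuming the HBase 2.x client:

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateTable1Sketch {
  // Creates np1:table1 with the single family 'fam1' requested above.
  static void createTable1(Admin admin) throws IOException {
    TableDescriptor table1 = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("np1", "table1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .build();
    admin.createTable(table1); // returns once the CreateTableProcedure (pid=15 above) completes
  }
}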
2023-07-11 18:17:20,935 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:20,935 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. after waiting 0 ms 2023-07-11 18:17:20,935 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:20,935 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:20,935 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 06da5d40e374ae32c53aaef8d43cd468: 2023-07-11 18:17:20,937 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:20,938 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099440938"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099440938"}]},"ts":"1689099440938"} 2023-07-11 18:17:20,940 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:20,941 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:20,941 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099440941"}]},"ts":"1689099440941"} 2023-07-11 18:17:20,942 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-11 18:17:20,946 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:20,946 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:20,946 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:20,946 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:20,946 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:20,946 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=06da5d40e374ae32c53aaef8d43cd468, ASSIGN}] 2023-07-11 18:17:20,947 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=06da5d40e374ae32c53aaef8d43cd468, ASSIGN 2023-07-11 18:17:20,947 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, 
region=06da5d40e374ae32c53aaef8d43cd468, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39037,1689099438320; forceNewPlan=false, retain=false 2023-07-11 18:17:20,950 INFO [jenkins-hbase4:40609] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-11 18:17:20,951 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=94077be50b5646d492a0b264c0b8a769, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,951 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=06da5d40e374ae32c53aaef8d43cd468, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:20,952 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689099440951"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099440951"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099440951"}]},"ts":"1689099440951"} 2023-07-11 18:17:20,952 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099440951"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099440951"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099440951"}]},"ts":"1689099440951"} 2023-07-11 18:17:20,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=14, state=RUNNABLE; OpenRegionProcedure 94077be50b5646d492a0b264c0b8a769, server=jenkins-hbase4.apache.org,39037,1689099438320}] 2023-07-11 18:17:20,954 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=16, state=RUNNABLE; OpenRegionProcedure 06da5d40e374ae32c53aaef8d43cd468, server=jenkins-hbase4.apache.org,39037,1689099438320}] 2023-07-11 18:17:21,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-11 18:17:21,108 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 
2023-07-11 18:17:21,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 94077be50b5646d492a0b264c0b8a769, NAME => 'hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:21,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:21,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,109 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,111 INFO [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,111 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:21,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 06da5d40e374ae32c53aaef8d43cd468, NAME => 'np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:21,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:21,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,112 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,112 DEBUG [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769/q 2023-07-11 18:17:21,112 DEBUG [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769/q 2023-07-11 18:17:21,113 INFO [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 
EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 94077be50b5646d492a0b264c0b8a769 columnFamilyName q 2023-07-11 18:17:21,113 INFO [StoreOpener-06da5d40e374ae32c53aaef8d43cd468-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,113 INFO [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] regionserver.HStore(310): Store=94077be50b5646d492a0b264c0b8a769/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:21,113 INFO [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,115 DEBUG [StoreOpener-06da5d40e374ae32c53aaef8d43cd468-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/fam1 2023-07-11 18:17:21,115 DEBUG [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769/u 2023-07-11 18:17:21,115 DEBUG [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769/u 2023-07-11 18:17:21,115 DEBUG [StoreOpener-06da5d40e374ae32c53aaef8d43cd468-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/fam1 2023-07-11 18:17:21,115 INFO [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 94077be50b5646d492a0b264c0b8a769 columnFamilyName u 2023-07-11 18:17:21,115 INFO [StoreOpener-06da5d40e374ae32c53aaef8d43cd468-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 06da5d40e374ae32c53aaef8d43cd468 columnFamilyName fam1 2023-07-11 18:17:21,116 INFO [StoreOpener-94077be50b5646d492a0b264c0b8a769-1] regionserver.HStore(310): Store=94077be50b5646d492a0b264c0b8a769/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:21,116 INFO [StoreOpener-06da5d40e374ae32c53aaef8d43cd468-1] regionserver.HStore(310): Store=06da5d40e374ae32c53aaef8d43cd468/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:21,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,117 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,119 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
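The FlushLargeStoresPolicy message above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so the policy falls back to the memstore flush size divided by the number of families. A table that wants an explicit lower bound can supply it as a descriptor value; a hedged sketch, where the 16 MB figure and the demo table name are assumptions, not values from the log:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FlushLowerBoundSketch {
  // Supplies the per-family flush lower bound named in the log message above as a
  // table-descriptor value. Table name and the 16 MB figure are illustrative only.
  static TableDescriptor withFlushLowerBound() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo", "flush_tuned"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
            String.valueOf(16 * 1024 * 1024))
        .build();
  }
}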
2023-07-11 18:17:21,121 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,123 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,124 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:21,125 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 94077be50b5646d492a0b264c0b8a769; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12052499040, jitterRate=0.12247644364833832}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-11 18:17:21,125 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 94077be50b5646d492a0b264c0b8a769: 2023-07-11 18:17:21,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:21,126 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769., pid=17, masterSystemTime=1689099441105 2023-07-11 18:17:21,126 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 06da5d40e374ae32c53aaef8d43cd468; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11185504000, jitterRate=0.04173123836517334}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:21,126 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 06da5d40e374ae32c53aaef8d43cd468: 2023-07-11 18:17:21,127 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468., pid=18, masterSystemTime=1689099441105 2023-07-11 18:17:21,128 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:21,128 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 
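The open messages above report which split policy each region resolved to: hbase:rsgroup (earlier) uses DisabledRegionSplitPolicy, while hbase:quota and np1:table1 land on a SteppingSplitPolicy. A table can pin its policy through its descriptor; a minimal sketch assuming the HBase 2.x client, with the demo table name being hypothetical:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SplitPolicySketch {
  // Pins a split policy on a table via its descriptor, the same mechanism behind the
  // policies reported above (DisabledRegionSplitPolicy for hbase:rsgroup,
  // SteppingSplitPolicy for hbase:quota and np1:table1).
  static TableDescriptor withDisabledSplits() {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("demo", "no_splits"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}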
2023-07-11 18:17:21,128 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=14 updating hbase:meta row=94077be50b5646d492a0b264c0b8a769, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:21,128 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689099441128"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099441128"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099441128"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099441128"}]},"ts":"1689099441128"} 2023-07-11 18:17:21,129 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:21,129 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:21,129 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=06da5d40e374ae32c53aaef8d43cd468, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:21,129 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099441129"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099441129"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099441129"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099441129"}]},"ts":"1689099441129"} 2023-07-11 18:17:21,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=14 2023-07-11 18:17:21,132 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=14, state=SUCCESS; OpenRegionProcedure 94077be50b5646d492a0b264c0b8a769, server=jenkins-hbase4.apache.org,39037,1689099438320 in 177 msec 2023-07-11 18:17:21,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=16 2023-07-11 18:17:21,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=16, state=SUCCESS; OpenRegionProcedure 06da5d40e374ae32c53aaef8d43cd468, server=jenkins-hbase4.apache.org,39037,1689099438320 in 177 msec 2023-07-11 18:17:21,140 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-11 18:17:21,140 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=94077be50b5646d492a0b264c0b8a769, ASSIGN in 334 msec 2023-07-11 18:17:21,141 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:21,141 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099441141"}]},"ts":"1689099441141"} 2023-07-11 18:17:21,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, 
resume processing ppid=15 2023-07-11 18:17:21,142 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=06da5d40e374ae32c53aaef8d43cd468, ASSIGN in 186 msec 2023-07-11 18:17:21,143 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:21,143 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-11 18:17:21,143 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099441143"}]},"ts":"1689099441143"} 2023-07-11 18:17:21,145 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-11 18:17:21,146 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:21,147 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 425 msec 2023-07-11 18:17:21,147 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:21,149 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 250 msec 2023-07-11 18:17:21,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-11 18:17:21,206 INFO [Listener at localhost/45067] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-11 18:17:21,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:21,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-11 18:17:21,210 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:21,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-11 18:17:21,212 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 18:17:21,231 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via 
master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=23 msec 2023-07-11 18:17:21,313 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 18:17:21,316 INFO [Listener at localhost/45067] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-11 18:17:21,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:21,317 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:21,318 INFO [Listener at localhost/45067] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-11 18:17:21,319 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-11 18:17:21,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-11 18:17:21,322 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 18:17:21,322 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099441322"}]},"ts":"1689099441322"} 2023-07-11 18:17:21,323 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-11 18:17:21,326 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-11 18:17:21,327 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=06da5d40e374ae32c53aaef8d43cd468, UNASSIGN}] 2023-07-11 18:17:21,328 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=06da5d40e374ae32c53aaef8d43cd468, UNASSIGN 2023-07-11 18:17:21,328 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=06da5d40e374ae32c53aaef8d43cd468, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:21,328 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099441328"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099441328"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099441328"}]},"ts":"1689099441328"} 2023-07-11 18:17:21,329 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 06da5d40e374ae32c53aaef8d43cd468, server=jenkins-hbase4.apache.org,39037,1689099438320}] 2023-07-11 18:17:21,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 18:17:21,469 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-11 18:17:21,481 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 06da5d40e374ae32c53aaef8d43cd468, disabling compactions & flushes 2023-07-11 18:17:21,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:21,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:21,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. after waiting 0 ms 2023-07-11 18:17:21,482 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 2023-07-11 18:17:21,486 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:21,487 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468. 
2023-07-11 18:17:21,487 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 06da5d40e374ae32c53aaef8d43cd468: 2023-07-11 18:17:21,491 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,491 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=06da5d40e374ae32c53aaef8d43cd468, regionState=CLOSED 2023-07-11 18:17:21,493 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099441491"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099441491"}]},"ts":"1689099441491"} 2023-07-11 18:17:21,502 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-11 18:17:21,502 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 06da5d40e374ae32c53aaef8d43cd468, server=jenkins-hbase4.apache.org,39037,1689099438320 in 167 msec 2023-07-11 18:17:21,504 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-11 18:17:21,504 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=06da5d40e374ae32c53aaef8d43cd468, UNASSIGN in 175 msec 2023-07-11 18:17:21,505 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099441505"}]},"ts":"1689099441505"} 2023-07-11 18:17:21,506 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-11 18:17:21,508 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-11 18:17:21,510 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 190 msec 2023-07-11 18:17:21,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 18:17:21,624 INFO [Listener at localhost/45067] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-11 18:17:21,625 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-11 18:17:21,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-11 18:17:21,628 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 18:17:21,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-11 18:17:21,629 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 18:17:21,630 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:21,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:17:21,632 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,634 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-11 18:17:21,634 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/fam1, FileablePath, hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/recovered.edits] 2023-07-11 18:17:21,639 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/recovered.edits/4.seqid to hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/archive/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468/recovered.edits/4.seqid 2023-07-11 18:17:21,639 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/.tmp/data/np1/table1/06da5d40e374ae32c53aaef8d43cd468 2023-07-11 18:17:21,639 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-11 18:17:21,641 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 18:17:21,643 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-11 18:17:21,645 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-11 18:17:21,645 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 18:17:21,646 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-11 18:17:21,646 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099441646"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:21,647 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 18:17:21,647 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 06da5d40e374ae32c53aaef8d43cd468, NAME => 'np1:table1,,1689099440897.06da5d40e374ae32c53aaef8d43cd468.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 18:17:21,647 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
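Below, once np1:table1 is fully removed from hbase:meta, the test drops the now-empty np1 namespace (DeleteNamespaceProcedure pid=24) and shuts the minicluster down. A minimal sketch of those final client-side steps, assuming the Admin and HBaseTestingUtility APIs; the method shape and parameter names are assumptions:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class TearDownSketch {
  // Drops the now-empty np1 namespace and stops the minicluster, matching the
  // DeleteNamespaceProcedure (pid=24) and shutdown messages that follow below.
  static void tearDown(Admin admin, HBaseTestingUtility util) throws Exception {
    admin.deleteNamespace("np1");
    util.shutdownMiniCluster();
  }
}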
2023-07-11 18:17:21,647 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099441647"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:21,648 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-11 18:17:21,650 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-11 18:17:21,651 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 25 msec 2023-07-11 18:17:21,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-11 18:17:21,735 INFO [Listener at localhost/45067] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-11 18:17:21,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-11 18:17:21,747 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-11 18:17:21,749 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 18:17:21,752 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 18:17:21,754 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 18:17:21,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-11 18:17:21,756 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-11 18:17:21,756 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:21,757 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 18:17:21,758 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-11 18:17:21,759 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 17 msec 2023-07-11 18:17:21,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40609] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-11 18:17:21,856 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-11 18:17:21,856 INFO [Listener at 
localhost/45067] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6b7167b7 to 127.0.0.1:51347 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] util.JVMClusterUtil(257): Found active master hash=51668313, stopped=false 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 18:17:21,857 DEBUG [Listener at localhost/45067] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-11 18:17:21,857 INFO [Listener at localhost/45067] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:21,859 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:21,859 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:21,859 INFO [Listener at localhost/45067] procedure2.ProcedureExecutor(629): Stopping 2023-07-11 18:17:21,859 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:21,859 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:21,860 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:21,861 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:21,861 DEBUG [Listener at localhost/45067] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0d1faf66 to 127.0.0.1:51347 2023-07-11 18:17:21,861 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:21,861 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:21,862 DEBUG [Listener at localhost/45067] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:21,862 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:21,862 INFO [Listener at localhost/45067] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45273,1689099438153' ***** 2023-07-11 18:17:21,863 INFO [Listener at localhost/45067] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:21,863 INFO [Listener at localhost/45067] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39037,1689099438320' ***** 2023-07-11 18:17:21,863 INFO [Listener at localhost/45067] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:21,863 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:21,863 INFO [Listener at localhost/45067] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42269,1689099438497' ***** 2023-07-11 18:17:21,863 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:21,864 INFO [Listener at localhost/45067] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:21,871 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:21,881 INFO [RS:0;jenkins-hbase4:45273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@73f692b6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:21,882 INFO [RS:1;jenkins-hbase4:39037] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@31ec606e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:21,883 INFO [RS:0;jenkins-hbase4:45273] server.AbstractConnector(383): Stopped ServerConnector@5230b41e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:21,883 INFO [RS:0;jenkins-hbase4:45273] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:21,883 INFO [RS:2;jenkins-hbase4:42269] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6a1b9d4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:21,883 INFO [RS:1;jenkins-hbase4:39037] server.AbstractConnector(383): Stopped ServerConnector@f5d6833{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:21,883 INFO [RS:1;jenkins-hbase4:39037] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:21,883 INFO [RS:2;jenkins-hbase4:42269] server.AbstractConnector(383): Stopped ServerConnector@5c04004{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:21,883 INFO [RS:2;jenkins-hbase4:42269] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:21,886 INFO [RS:2;jenkins-hbase4:42269] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@65d00383{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 
2023-07-11 18:17:21,886 INFO [RS:1;jenkins-hbase4:39037] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@398f58e0{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:21,886 INFO [RS:0;jenkins-hbase4:45273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14ee43e5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:21,886 INFO [RS:2;jenkins-hbase4:42269] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f728d8a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:21,887 INFO [RS:0;jenkins-hbase4:45273] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@204cfa25{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:21,887 INFO [RS:1;jenkins-hbase4:39037] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@321cd692{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:21,887 INFO [RS:1;jenkins-hbase4:39037] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:21,887 INFO [RS:0;jenkins-hbase4:45273] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:21,887 INFO [RS:1;jenkins-hbase4:39037] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:21,887 INFO [RS:0;jenkins-hbase4:45273] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:21,888 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:21,888 INFO [RS:0;jenkins-hbase4:45273] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:21,887 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:21,888 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:21,890 INFO [RS:2;jenkins-hbase4:42269] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:21,887 INFO [RS:1;jenkins-hbase4:39037] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:21,890 INFO [RS:2;jenkins-hbase4:42269] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-11 18:17:21,890 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(3305): Received CLOSE for bebe08844f5dda37e3ab23f14ae677f7 2023-07-11 18:17:21,890 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:21,890 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(3305): Received CLOSE for 94077be50b5646d492a0b264c0b8a769 2023-07-11 18:17:21,891 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(3305): Received CLOSE for d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:21,890 DEBUG [RS:0;jenkins-hbase4:45273] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x702a8227 to 127.0.0.1:51347 2023-07-11 18:17:21,891 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:21,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bebe08844f5dda37e3ab23f14ae677f7, disabling compactions & flushes 2023-07-11 18:17:21,890 INFO [RS:2;jenkins-hbase4:42269] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:21,891 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:21,891 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:21,891 DEBUG [RS:2;jenkins-hbase4:42269] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x49cab2c7 to 127.0.0.1:51347 2023-07-11 18:17:21,891 DEBUG [RS:2;jenkins-hbase4:42269] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:21,891 DEBUG [RS:1;jenkins-hbase4:39037] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x287b0d17 to 127.0.0.1:51347 2023-07-11 18:17:21,891 DEBUG [RS:0;jenkins-hbase4:45273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:21,892 INFO [RS:0;jenkins-hbase4:45273] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:21,892 INFO [RS:0;jenkins-hbase4:45273] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:21,892 DEBUG [RS:1;jenkins-hbase4:39037] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:21,892 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42269,1689099438497; all regions closed. 2023-07-11 18:17:21,891 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:21,892 DEBUG [RS:2;jenkins-hbase4:42269] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-11 18:17:21,892 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1474): Waiting on 3 regions to close 2023-07-11 18:17:21,892 INFO [RS:0;jenkins-hbase4:45273] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-11 18:17:21,892 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1478): Online Regions={bebe08844f5dda37e3ab23f14ae677f7=hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7., 94077be50b5646d492a0b264c0b8a769=hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769., d3f4beb004c0ee8df7011e5e153eae88=hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88.} 2023-07-11 18:17:21,892 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. after waiting 0 ms 2023-07-11 18:17:21,892 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-11 18:17:21,893 DEBUG [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1504): Waiting on 94077be50b5646d492a0b264c0b8a769, bebe08844f5dda37e3ab23f14ae677f7, d3f4beb004c0ee8df7011e5e153eae88 2023-07-11 18:17:21,893 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:21,893 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-11 18:17:21,893 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-11 18:17:21,893 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing bebe08844f5dda37e3ab23f14ae677f7 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-11 18:17:21,893 DEBUG [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-11 18:17:21,895 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:17:21,903 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:17:21,903 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:17:21,903 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:17:21,903 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:17:21,903 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-11 18:17:21,923 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:21,923 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:21,926 DEBUG [RS:2;jenkins-hbase4:42269] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs 2023-07-11 18:17:21,926 INFO [RS:2;jenkins-hbase4:42269] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42269%2C1689099438497:(num 1689099439617) 2023-07-11 18:17:21,926 DEBUG [RS:2;jenkins-hbase4:42269] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:21,926 INFO [RS:2;jenkins-hbase4:42269] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:21,926 INFO 
[RS:2;jenkins-hbase4:42269] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:21,926 INFO [RS:2;jenkins-hbase4:42269] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:21,926 INFO [RS:2;jenkins-hbase4:42269] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:21,926 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:21,926 INFO [RS:2;jenkins-hbase4:42269] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:21,928 INFO [RS:2;jenkins-hbase4:42269] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42269 2023-07-11 18:17:21,939 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/.tmp/m/7cc1228879974692b234b75dbc505cac 2023-07-11 18:17:21,943 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/.tmp/info/aea944981da648ddaa64d35f1b991c14 2023-07-11 18:17:21,947 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/.tmp/m/7cc1228879974692b234b75dbc505cac as hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/m/7cc1228879974692b234b75dbc505cac 2023-07-11 18:17:21,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aea944981da648ddaa64d35f1b991c14 2023-07-11 18:17:21,953 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/m/7cc1228879974692b234b75dbc505cac, entries=1, sequenceid=7, filesize=4.9 K 2023-07-11 18:17:21,956 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for bebe08844f5dda37e3ab23f14ae677f7 in 63ms, sequenceid=7, compaction requested=false 2023-07-11 18:17:21,956 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-11 18:17:21,962 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:21,972 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/rsgroup/bebe08844f5dda37e3ab23f14ae677f7/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-11 18:17:21,973 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:17:21,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:21,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bebe08844f5dda37e3ab23f14ae677f7: 2023-07-11 18:17:21,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689099440418.bebe08844f5dda37e3ab23f14ae677f7. 2023-07-11 18:17:21,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 94077be50b5646d492a0b264c0b8a769, disabling compactions & flushes 2023-07-11 18:17:21,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:21,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:21,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. after waiting 0 ms 2023-07-11 18:17:21,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:21,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/.tmp/rep_barrier/81c6c71eb6dd4b10b66bd61f8f109351 2023-07-11 18:17:21,979 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/quota/94077be50b5646d492a0b264c0b8a769/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:21,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:21,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 94077be50b5646d492a0b264c0b8a769: 2023-07-11 18:17:21,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689099440721.94077be50b5646d492a0b264c0b8a769. 2023-07-11 18:17:21,980 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing d3f4beb004c0ee8df7011e5e153eae88, disabling compactions & flushes 2023-07-11 18:17:21,980 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:21,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 
2023-07-11 18:17:21,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. after waiting 0 ms 2023-07-11 18:17:21,981 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:21,981 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing d3f4beb004c0ee8df7011e5e153eae88 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-11 18:17:21,982 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 81c6c71eb6dd4b10b66bd61f8f109351 2023-07-11 18:17:21,998 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:21,999 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:21,999 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:21,999 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:21,999 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:21,999 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42269,1689099438497 2023-07-11 18:17:21,999 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:21,999 ERROR [Listener at localhost/45067-EventThread] zookeeper.ClientCnxn$EventThread(537): Error while calling watcher java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@15bdc637 rejected from java.util.concurrent.ThreadPoolExecutor@11825058[Shutting down, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 5] at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112) at 
java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678) at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:603) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510) 2023-07-11 18:17:22,001 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/.tmp/table/12d7c5ffe6e74809a2e007f86a1807ad 2023-07-11 18:17:22,001 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/.tmp/info/eb794ac3ec3a42ad815c501995e73f89 2023-07-11 18:17:22,002 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42269,1689099438497] 2023-07-11 18:17:22,002 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42269,1689099438497; numProcessing=1 2023-07-11 18:17:22,004 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42269,1689099438497 already deleted, retry=false 2023-07-11 18:17:22,004 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42269,1689099438497 expired; onlineServers=2 2023-07-11 18:17:22,009 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb794ac3ec3a42ad815c501995e73f89 2023-07-11 18:17:22,009 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 12d7c5ffe6e74809a2e007f86a1807ad 2023-07-11 18:17:22,010 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/.tmp/info/eb794ac3ec3a42ad815c501995e73f89 as hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/info/eb794ac3ec3a42ad815c501995e73f89 2023-07-11 18:17:22,010 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/.tmp/info/aea944981da648ddaa64d35f1b991c14 as hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/info/aea944981da648ddaa64d35f1b991c14 2023-07-11 18:17:22,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for eb794ac3ec3a42ad815c501995e73f89 2023-07-11 18:17:22,015 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for aea944981da648ddaa64d35f1b991c14 2023-07-11 18:17:22,015 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HStore(1080): Added hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/info/eb794ac3ec3a42ad815c501995e73f89, entries=3, sequenceid=8, filesize=5.0 K 2023-07-11 18:17:22,015 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/info/aea944981da648ddaa64d35f1b991c14, entries=32, sequenceid=31, filesize=8.5 K 2023-07-11 18:17:22,016 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for d3f4beb004c0ee8df7011e5e153eae88 in 35ms, sequenceid=8, compaction requested=false 2023-07-11 18:17:22,016 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-11 18:17:22,016 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/.tmp/rep_barrier/81c6c71eb6dd4b10b66bd61f8f109351 as hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/rep_barrier/81c6c71eb6dd4b10b66bd61f8f109351 2023-07-11 18:17:22,027 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/namespace/d3f4beb004c0ee8df7011e5e153eae88/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-11 18:17:22,028 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 81c6c71eb6dd4b10b66bd61f8f109351 2023-07-11 18:17:22,028 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/rep_barrier/81c6c71eb6dd4b10b66bd61f8f109351, entries=1, sequenceid=31, filesize=4.9 K 2023-07-11 18:17:22,029 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/.tmp/table/12d7c5ffe6e74809a2e007f86a1807ad as hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/table/12d7c5ffe6e74809a2e007f86a1807ad 2023-07-11 18:17:22,030 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 2023-07-11 18:17:22,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for d3f4beb004c0ee8df7011e5e153eae88: 2023-07-11 18:17:22,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689099440280.d3f4beb004c0ee8df7011e5e153eae88. 
2023-07-11 18:17:22,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 12d7c5ffe6e74809a2e007f86a1807ad 2023-07-11 18:17:22,035 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/table/12d7c5ffe6e74809a2e007f86a1807ad, entries=8, sequenceid=31, filesize=5.2 K 2023-07-11 18:17:22,036 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 133ms, sequenceid=31, compaction requested=false 2023-07-11 18:17:22,036 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-11 18:17:22,045 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-11 18:17:22,046 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:17:22,046 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:22,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:17:22,047 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:22,093 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39037,1689099438320; all regions closed. 2023-07-11 18:17:22,093 DEBUG [RS:1;jenkins-hbase4:39037] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-11 18:17:22,099 DEBUG [RS:1;jenkins-hbase4:39037] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs 2023-07-11 18:17:22,099 INFO [RS:1;jenkins-hbase4:39037] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39037%2C1689099438320:(num 1689099439618) 2023-07-11 18:17:22,099 DEBUG [RS:1;jenkins-hbase4:39037] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:22,099 INFO [RS:1;jenkins-hbase4:39037] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:22,100 INFO [RS:1;jenkins-hbase4:39037] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:22,100 INFO [RS:1;jenkins-hbase4:39037] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:22,100 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:22,100 INFO [RS:1;jenkins-hbase4:39037] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:22,100 INFO [RS:1;jenkins-hbase4:39037] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-11 18:17:22,101 INFO [RS:1;jenkins-hbase4:39037] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39037 2023-07-11 18:17:22,103 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45273,1689099438153; all regions closed. 2023-07-11 18:17:22,103 DEBUG [RS:0;jenkins-hbase4:45273] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-11 18:17:22,105 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:22,105 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:22,105 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39037,1689099438320 2023-07-11 18:17:22,105 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39037,1689099438320] 2023-07-11 18:17:22,106 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39037,1689099438320; numProcessing=2 2023-07-11 18:17:22,107 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39037,1689099438320 already deleted, retry=false 2023-07-11 18:17:22,107 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39037,1689099438320 expired; onlineServers=1 2023-07-11 18:17:22,111 DEBUG [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs 2023-07-11 18:17:22,112 INFO [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45273%2C1689099438153.meta:.meta(num 1689099440160) 2023-07-11 18:17:22,117 DEBUG [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/oldWALs 2023-07-11 18:17:22,118 INFO [RS:0;jenkins-hbase4:45273] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45273%2C1689099438153:(num 1689099439613) 2023-07-11 18:17:22,118 DEBUG [RS:0;jenkins-hbase4:45273] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:22,118 INFO [RS:0;jenkins-hbase4:45273] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:22,118 INFO [RS:0;jenkins-hbase4:45273] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:22,118 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 18:17:22,119 INFO [RS:0;jenkins-hbase4:45273] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45273 2023-07-11 18:17:22,208 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,208 INFO [RS:1;jenkins-hbase4:39037] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39037,1689099438320; zookeeper connection closed. 2023-07-11 18:17:22,208 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:39037-0x101559a8a700002, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,209 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@427a6eb4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@427a6eb4 2023-07-11 18:17:22,210 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45273,1689099438153 2023-07-11 18:17:22,210 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:22,212 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45273,1689099438153] 2023-07-11 18:17:22,212 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45273,1689099438153; numProcessing=3 2023-07-11 18:17:22,213 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45273,1689099438153 already deleted, retry=false 2023-07-11 18:17:22,213 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45273,1689099438153 expired; onlineServers=0 2023-07-11 18:17:22,213 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40609,1689099437960' ***** 2023-07-11 18:17:22,213 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-11 18:17:22,213 DEBUG [M:0;jenkins-hbase4:40609] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@72bf1e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:22,214 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:22,215 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:22,215 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 
2023-07-11 18:17:22,216 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:22,216 INFO [M:0;jenkins-hbase4:40609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3b168a60{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 18:17:22,216 INFO [M:0;jenkins-hbase4:40609] server.AbstractConnector(383): Stopped ServerConnector@37863041{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:22,216 INFO [M:0;jenkins-hbase4:40609] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:22,217 INFO [M:0;jenkins-hbase4:40609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@9246e58{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:22,217 INFO [M:0;jenkins-hbase4:40609] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@625ee407{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:22,217 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40609,1689099437960 2023-07-11 18:17:22,217 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40609,1689099437960; all regions closed. 2023-07-11 18:17:22,217 DEBUG [M:0;jenkins-hbase4:40609] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:22,217 INFO [M:0;jenkins-hbase4:40609] master.HMaster(1491): Stopping master jetty server 2023-07-11 18:17:22,218 INFO [M:0;jenkins-hbase4:40609] server.AbstractConnector(383): Stopped ServerConnector@57910c14{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:22,218 DEBUG [M:0;jenkins-hbase4:40609] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-11 18:17:22,218 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-11 18:17:22,218 DEBUG [M:0;jenkins-hbase4:40609] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-11 18:17:22,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099439330] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099439330,5,FailOnTimeoutGroup] 2023-07-11 18:17:22,219 INFO [M:0;jenkins-hbase4:40609] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-11 18:17:22,219 INFO [M:0;jenkins-hbase4:40609] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-11 18:17:22,218 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099439330] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099439330,5,FailOnTimeoutGroup] 2023-07-11 18:17:22,220 INFO [M:0;jenkins-hbase4:40609] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:22,220 DEBUG [M:0;jenkins-hbase4:40609] master.HMaster(1512): Stopping service threads 2023-07-11 18:17:22,220 INFO [M:0;jenkins-hbase4:40609] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-11 18:17:22,221 ERROR [M:0;jenkins-hbase4:40609] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-11 18:17:22,221 INFO [M:0;jenkins-hbase4:40609] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-11 18:17:22,221 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-11 18:17:22,221 DEBUG [M:0;jenkins-hbase4:40609] zookeeper.ZKUtil(398): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-11 18:17:22,221 WARN [M:0;jenkins-hbase4:40609] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-11 18:17:22,221 INFO [M:0;jenkins-hbase4:40609] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-11 18:17:22,222 INFO [M:0;jenkins-hbase4:40609] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-11 18:17:22,222 DEBUG [M:0;jenkins-hbase4:40609] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 18:17:22,222 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:22,222 DEBUG [M:0;jenkins-hbase4:40609] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:22,222 DEBUG [M:0;jenkins-hbase4:40609] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 18:17:22,222 DEBUG [M:0;jenkins-hbase4:40609] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 18:17:22,222 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.13 KB 2023-07-11 18:17:22,234 INFO [M:0;jenkins-hbase4:40609] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/149ed0e336fe46278cbfd1f819dcc640 2023-07-11 18:17:22,239 DEBUG [M:0;jenkins-hbase4:40609] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/149ed0e336fe46278cbfd1f819dcc640 as hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/149ed0e336fe46278cbfd1f819dcc640 2023-07-11 18:17:22,245 INFO [M:0;jenkins-hbase4:40609] regionserver.HStore(1080): Added hdfs://localhost:33083/user/jenkins/test-data/e5699b06-12d8-f4c8-fb63-af8a4f5eca1c/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/149ed0e336fe46278cbfd1f819dcc640, entries=24, sequenceid=194, filesize=12.4 K 2023-07-11 18:17:22,245 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95233, heapSize ~109.12 KB/111736, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=194, compaction requested=false 2023-07-11 18:17:22,247 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:22,247 DEBUG [M:0;jenkins-hbase4:40609] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:22,252 INFO [M:0;jenkins-hbase4:40609] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-11 18:17:22,252 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:22,252 INFO [M:0;jenkins-hbase4:40609] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40609 2023-07-11 18:17:22,255 DEBUG [M:0;jenkins-hbase4:40609] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40609,1689099437960 already deleted, retry=false 2023-07-11 18:17:22,360 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,360 INFO [M:0;jenkins-hbase4:40609] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40609,1689099437960; zookeeper connection closed. 2023-07-11 18:17:22,360 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): master:40609-0x101559a8a700000, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,460 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,460 INFO [RS:0;jenkins-hbase4:45273] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45273,1689099438153; zookeeper connection closed. 
2023-07-11 18:17:22,460 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:45273-0x101559a8a700001, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,460 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7e3c5d33] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7e3c5d33 2023-07-11 18:17:22,560 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,560 INFO [RS:2;jenkins-hbase4:42269] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42269,1689099438497; zookeeper connection closed. 2023-07-11 18:17:22,560 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): regionserver:42269-0x101559a8a700003, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:22,561 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5eec24d4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5eec24d4 2023-07-11 18:17:22,561 INFO [Listener at localhost/45067] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-11 18:17:22,561 WARN [Listener at localhost/45067] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:22,565 INFO [Listener at localhost/45067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:22,670 WARN [BP-1624959893-172.31.14.131-1689099437175 heartbeating to localhost/127.0.0.1:33083] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:22,670 WARN [BP-1624959893-172.31.14.131-1689099437175 heartbeating to localhost/127.0.0.1:33083] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1624959893-172.31.14.131-1689099437175 (Datanode Uuid 19532065-78a7-45e7-b1e1-ad3572ab15d7) service to localhost/127.0.0.1:33083 2023-07-11 18:17:22,671 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/dfs/data/data5/current/BP-1624959893-172.31.14.131-1689099437175] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:22,671 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/dfs/data/data6/current/BP-1624959893-172.31.14.131-1689099437175] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:22,673 WARN [Listener at localhost/45067] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:22,680 INFO [Listener at localhost/45067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:22,784 WARN [BP-1624959893-172.31.14.131-1689099437175 heartbeating to localhost/127.0.0.1:33083] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager 
interrupted 2023-07-11 18:17:22,785 WARN [BP-1624959893-172.31.14.131-1689099437175 heartbeating to localhost/127.0.0.1:33083] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1624959893-172.31.14.131-1689099437175 (Datanode Uuid 79ed39e4-9e95-4d5e-9b06-a0c1a820c28d) service to localhost/127.0.0.1:33083 2023-07-11 18:17:22,785 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/dfs/data/data3/current/BP-1624959893-172.31.14.131-1689099437175] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:22,786 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/dfs/data/data4/current/BP-1624959893-172.31.14.131-1689099437175] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:22,787 WARN [Listener at localhost/45067] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:22,791 INFO [Listener at localhost/45067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:22,895 WARN [BP-1624959893-172.31.14.131-1689099437175 heartbeating to localhost/127.0.0.1:33083] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:22,895 WARN [BP-1624959893-172.31.14.131-1689099437175 heartbeating to localhost/127.0.0.1:33083] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1624959893-172.31.14.131-1689099437175 (Datanode Uuid 628cb278-a110-45f6-8968-50d98a90d79b) service to localhost/127.0.0.1:33083 2023-07-11 18:17:22,895 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/dfs/data/data1/current/BP-1624959893-172.31.14.131-1689099437175] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:22,896 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/cluster_56a3dce2-f7b5-2cfe-4fd6-13f560ee9feb/dfs/data/data2/current/BP-1624959893-172.31.14.131-1689099437175] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:22,907 INFO [Listener at localhost/45067] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:23,023 INFO [Listener at localhost/45067] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-11 18:17:23,049 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-11 18:17:23,049 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-11 18:17:23,049 INFO [Listener at localhost/45067] 
hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.log.dir so I do NOT create it in target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84 2023-07-11 18:17:23,049 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/152999f3-93a0-bfaa-5734-e2613ebfdd1b/hadoop.tmp.dir so I do NOT create it in target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84 2023-07-11 18:17:23,049 INFO [Listener at localhost/45067] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5, deleteOnExit=true 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/test.cache.data in system properties and HBase conf 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.tmp.dir in system properties and HBase conf 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir in system properties and HBase conf 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-11 18:17:23,050 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-11 18:17:23,050 DEBUG [Listener at localhost/45067] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-11 18:17:23,051 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/nfs.dump.dir in system properties and HBase conf 2023-07-11 18:17:23,052 INFO [Listener at localhost/45067] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir in system properties and HBase conf 2023-07-11 18:17:23,052 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-11 18:17:23,052 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-11 18:17:23,052 INFO [Listener at localhost/45067] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-11 18:17:23,056 WARN [Listener at localhost/45067] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 18:17:23,056 WARN [Listener at localhost/45067] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 18:17:23,095 WARN [Listener at localhost/45067] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:23,097 INFO [Listener at localhost/45067] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:23,102 INFO [Listener at localhost/45067] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/Jetty_localhost_41161_hdfs____.wf5lc7/webapp 2023-07-11 18:17:23,121 DEBUG [Listener at localhost/45067-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x101559a8a70000a, quorum=127.0.0.1:51347, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-11 18:17:23,121 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x101559a8a70000a, quorum=127.0.0.1:51347, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-11 18:17:23,196 INFO [Listener at localhost/45067] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41161 2023-07-11 18:17:23,200 WARN [Listener at localhost/45067] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-11 18:17:23,201 WARN [Listener at localhost/45067] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-11 18:17:23,246 WARN [Listener at localhost/43601] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:17:23,262 WARN [Listener at localhost/43601] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:17:23,265 WARN [Listener 
at localhost/43601] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:23,267 INFO [Listener at localhost/43601] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:23,273 INFO [Listener at localhost/43601] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/Jetty_localhost_33267_datanode____26vp16/webapp 2023-07-11 18:17:23,367 INFO [Listener at localhost/43601] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33267 2023-07-11 18:17:23,377 WARN [Listener at localhost/45957] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:17:23,391 WARN [Listener at localhost/45957] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:17:23,393 WARN [Listener at localhost/45957] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:23,394 INFO [Listener at localhost/45957] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:23,397 INFO [Listener at localhost/45957] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/Jetty_localhost_33419_datanode____d4hv4z/webapp 2023-07-11 18:17:23,489 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x11581a3198f1b7ee: Processing first storage report for DS-4de7cb8d-f313-4791-bb16-a91a9507048d from datanode 1bb41cfb-ff7f-48c3-974d-e8fbb34ed2b6 2023-07-11 18:17:23,489 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x11581a3198f1b7ee: from storage DS-4de7cb8d-f313-4791-bb16-a91a9507048d node DatanodeRegistration(127.0.0.1:42183, datanodeUuid=1bb41cfb-ff7f-48c3-974d-e8fbb34ed2b6, infoPort=35251, infoSecurePort=0, ipcPort=45957, storageInfo=lv=-57;cid=testClusterID;nsid=60118177;c=1689099443058), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:23,489 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x11581a3198f1b7ee: Processing first storage report for DS-7baf8332-b175-4405-8ec0-00ad2b95b9b0 from datanode 1bb41cfb-ff7f-48c3-974d-e8fbb34ed2b6 2023-07-11 18:17:23,489 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x11581a3198f1b7ee: from storage DS-7baf8332-b175-4405-8ec0-00ad2b95b9b0 node DatanodeRegistration(127.0.0.1:42183, datanodeUuid=1bb41cfb-ff7f-48c3-974d-e8fbb34ed2b6, infoPort=35251, infoSecurePort=0, ipcPort=45957, storageInfo=lv=-57;cid=testClusterID;nsid=60118177;c=1689099443058), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:23,506 INFO [Listener at localhost/45957] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33419 2023-07-11 18:17:23,513 WARN [Listener at localhost/46453] common.MetricsLoggerTask(153): Metrics logging will not be async since 
the logger is not log4j 2023-07-11 18:17:23,534 WARN [Listener at localhost/46453] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-11 18:17:23,536 WARN [Listener at localhost/46453] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-11 18:17:23,537 INFO [Listener at localhost/46453] log.Slf4jLog(67): jetty-6.1.26 2023-07-11 18:17:23,541 INFO [Listener at localhost/46453] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/Jetty_localhost_37609_datanode____.xgc98e/webapp 2023-07-11 18:17:23,622 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8505181f56020ead: Processing first storage report for DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29 from datanode 2916cb80-a26a-4bb5-9342-deefdd8add05 2023-07-11 18:17:23,622 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8505181f56020ead: from storage DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29 node DatanodeRegistration(127.0.0.1:43963, datanodeUuid=2916cb80-a26a-4bb5-9342-deefdd8add05, infoPort=46839, infoSecurePort=0, ipcPort=46453, storageInfo=lv=-57;cid=testClusterID;nsid=60118177;c=1689099443058), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:23,622 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x8505181f56020ead: Processing first storage report for DS-a6817ac7-24e0-41c6-b6e3-f6b36dd84764 from datanode 2916cb80-a26a-4bb5-9342-deefdd8add05 2023-07-11 18:17:23,622 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x8505181f56020ead: from storage DS-a6817ac7-24e0-41c6-b6e3-f6b36dd84764 node DatanodeRegistration(127.0.0.1:43963, datanodeUuid=2916cb80-a26a-4bb5-9342-deefdd8add05, infoPort=46839, infoSecurePort=0, ipcPort=46453, storageInfo=lv=-57;cid=testClusterID;nsid=60118177;c=1689099443058), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-11 18:17:23,636 INFO [Listener at localhost/46453] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37609 2023-07-11 18:17:23,643 WARN [Listener at localhost/41775] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-11 18:17:23,762 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x39411381d105da2b: Processing first storage report for DS-c7742031-d772-4baf-97b2-c6c1d38992ba from datanode 87545289-74e9-49cc-8704-64bb6212230f 2023-07-11 18:17:23,762 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x39411381d105da2b: from storage DS-c7742031-d772-4baf-97b2-c6c1d38992ba node DatanodeRegistration(127.0.0.1:35255, datanodeUuid=87545289-74e9-49cc-8704-64bb6212230f, infoPort=35793, infoSecurePort=0, ipcPort=41775, storageInfo=lv=-57;cid=testClusterID;nsid=60118177;c=1689099443058), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:23,762 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x39411381d105da2b: Processing first storage report for 
DS-4944b638-5cd0-4a7b-96ae-b00ccc232fe2 from datanode 87545289-74e9-49cc-8704-64bb6212230f 2023-07-11 18:17:23,762 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x39411381d105da2b: from storage DS-4944b638-5cd0-4a7b-96ae-b00ccc232fe2 node DatanodeRegistration(127.0.0.1:35255, datanodeUuid=87545289-74e9-49cc-8704-64bb6212230f, infoPort=35793, infoSecurePort=0, ipcPort=41775, storageInfo=lv=-57;cid=testClusterID;nsid=60118177;c=1689099443058), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-11 18:17:23,851 DEBUG [Listener at localhost/41775] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84 2023-07-11 18:17:23,853 INFO [Listener at localhost/41775] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/zookeeper_0, clientPort=50731, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-11 18:17:23,854 INFO [Listener at localhost/41775] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=50731 2023-07-11 18:17:23,855 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:23,856 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:23,877 INFO [Listener at localhost/41775] util.FSUtils(471): Created version file at hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845 with version=8 2023-07-11 18:17:23,877 INFO [Listener at localhost/41775] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:40365/user/jenkins/test-data/6142f20e-a8b4-4dbe-fccf-db33ad71e582/hbase-staging 2023-07-11 18:17:23,878 DEBUG [Listener at localhost/41775] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-11 18:17:23,878 DEBUG [Listener at localhost/41775] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-11 18:17:23,878 DEBUG [Listener at localhost/41775] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-11 18:17:23,878 DEBUG [Listener at localhost/41775] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:23,879 INFO [Listener at localhost/41775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:23,880 INFO [Listener at localhost/41775] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38477 2023-07-11 18:17:23,880 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:23,882 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:23,883 INFO [Listener at localhost/41775] zookeeper.RecoverableZooKeeper(93): Process identifier=master:38477 connecting to ZooKeeper ensemble=127.0.0.1:50731 2023-07-11 18:17:23,890 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:384770x0, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:23,892 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:38477-0x101559aa18f0000 connected 2023-07-11 18:17:23,917 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:23,917 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:23,918 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:23,921 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38477 2023-07-11 18:17:23,922 DEBUG [Listener at localhost/41775] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38477 2023-07-11 18:17:23,925 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38477 2023-07-11 18:17:23,926 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38477 2023-07-11 18:17:23,926 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38477 2023-07-11 18:17:23,928 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:23,928 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:23,928 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:23,928 INFO [Listener at localhost/41775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-11 18:17:23,929 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:23,929 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:23,929 INFO [Listener at localhost/41775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 18:17:23,929 INFO [Listener at localhost/41775] http.HttpServer(1146): Jetty bound to port 46721 2023-07-11 18:17:23,929 INFO [Listener at localhost/41775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:23,947 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:23,947 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@20f20edd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:23,948 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:23,948 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@11dbe345{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:24,065 INFO [Listener at localhost/41775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:24,066 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:24,066 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:24,066 INFO [Listener at localhost/41775] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:17:24,067 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,068 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@17d53e68{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/jetty-0_0_0_0-46721-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8662365612602129717/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 18:17:24,069 INFO [Listener at localhost/41775] server.AbstractConnector(333): Started ServerConnector@1f0d357d{HTTP/1.1, (http/1.1)}{0.0.0.0:46721} 2023-07-11 18:17:24,069 INFO [Listener at localhost/41775] server.Server(415): Started @44419ms 2023-07-11 18:17:24,069 INFO [Listener at localhost/41775] master.HMaster(444): hbase.rootdir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845, hbase.cluster.distributed=false 2023-07-11 18:17:24,082 INFO [Listener at localhost/41775] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:24,083 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,083 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,083 
INFO [Listener at localhost/41775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:24,083 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,083 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:24,083 INFO [Listener at localhost/41775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:24,085 INFO [Listener at localhost/41775] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33705 2023-07-11 18:17:24,085 INFO [Listener at localhost/41775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:24,086 DEBUG [Listener at localhost/41775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:24,086 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,087 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,088 INFO [Listener at localhost/41775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33705 connecting to ZooKeeper ensemble=127.0.0.1:50731 2023-07-11 18:17:24,092 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:337050x0, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:24,094 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:337050x0, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:24,094 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33705-0x101559aa18f0001 connected 2023-07-11 18:17:24,095 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:24,095 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:24,098 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33705 2023-07-11 18:17:24,099 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33705 2023-07-11 18:17:24,099 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33705 2023-07-11 18:17:24,099 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33705 2023-07-11 18:17:24,099 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33705 2023-07-11 18:17:24,101 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:24,101 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:24,102 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:24,102 INFO [Listener at localhost/41775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:24,102 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:24,102 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:24,103 INFO [Listener at localhost/41775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:17:24,103 INFO [Listener at localhost/41775] http.HttpServer(1146): Jetty bound to port 36451 2023-07-11 18:17:24,103 INFO [Listener at localhost/41775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:24,107 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,107 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@dfe7868{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:24,107 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,107 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33dbfe01{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:24,220 INFO [Listener at localhost/41775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:24,221 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:24,221 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:24,222 INFO [Listener at localhost/41775] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:17:24,222 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,223 INFO 
[Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@29a4f6d2{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/jetty-0_0_0_0-36451-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3503982972032511324/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:24,224 INFO [Listener at localhost/41775] server.AbstractConnector(333): Started ServerConnector@2034b22a{HTTP/1.1, (http/1.1)}{0.0.0.0:36451} 2023-07-11 18:17:24,224 INFO [Listener at localhost/41775] server.Server(415): Started @44574ms 2023-07-11 18:17:24,235 INFO [Listener at localhost/41775] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:24,236 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,236 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,236 INFO [Listener at localhost/41775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:24,236 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,236 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:24,236 INFO [Listener at localhost/41775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:24,237 INFO [Listener at localhost/41775] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37037 2023-07-11 18:17:24,237 INFO [Listener at localhost/41775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:24,238 DEBUG [Listener at localhost/41775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:24,239 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,239 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,240 INFO [Listener at localhost/41775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37037 connecting to ZooKeeper ensemble=127.0.0.1:50731 2023-07-11 18:17:24,244 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:370370x0, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 
18:17:24,245 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37037-0x101559aa18f0002 connected 2023-07-11 18:17:24,245 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:24,246 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:24,246 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:24,246 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37037 2023-07-11 18:17:24,247 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37037 2023-07-11 18:17:24,248 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37037 2023-07-11 18:17:24,248 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37037 2023-07-11 18:17:24,248 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37037 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:24,250 INFO [Listener at localhost/41775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-11 18:17:24,251 INFO [Listener at localhost/41775] http.HttpServer(1146): Jetty bound to port 35253 2023-07-11 18:17:24,251 INFO [Listener at localhost/41775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:24,255 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,255 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6fdedd33{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:24,255 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,255 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6015bda8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:24,367 INFO [Listener at localhost/41775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:24,367 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:24,368 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:24,368 INFO [Listener at localhost/41775] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-11 18:17:24,369 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,369 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4814edce{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/jetty-0_0_0_0-35253-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8887246882568657979/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:24,372 INFO [Listener at localhost/41775] server.AbstractConnector(333): Started ServerConnector@35af6de7{HTTP/1.1, (http/1.1)}{0.0.0.0:35253} 2023-07-11 18:17:24,372 INFO [Listener at localhost/41775] server.Server(415): Started @44722ms 2023-07-11 18:17:24,383 INFO [Listener at localhost/41775] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:24,383 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,383 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,383 INFO [Listener at localhost/41775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:24,383 INFO 
[Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:24,383 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:24,384 INFO [Listener at localhost/41775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:24,384 INFO [Listener at localhost/41775] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41487 2023-07-11 18:17:24,385 INFO [Listener at localhost/41775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:24,386 DEBUG [Listener at localhost/41775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:24,386 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,387 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,388 INFO [Listener at localhost/41775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41487 connecting to ZooKeeper ensemble=127.0.0.1:50731 2023-07-11 18:17:24,391 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:414870x0, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:24,393 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41487-0x101559aa18f0003 connected 2023-07-11 18:17:24,393 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:24,393 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:24,394 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:24,394 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41487 2023-07-11 18:17:24,396 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41487 2023-07-11 18:17:24,396 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41487 2023-07-11 18:17:24,397 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41487 2023-07-11 18:17:24,398 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41487 2023-07-11 18:17:24,400 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:24,400 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:24,400 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:24,401 INFO [Listener at localhost/41775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:24,401 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:24,401 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:24,401 INFO [Listener at localhost/41775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:17:24,402 INFO [Listener at localhost/41775] http.HttpServer(1146): Jetty bound to port 37915 2023-07-11 18:17:24,402 INFO [Listener at localhost/41775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:24,404 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,404 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@370f144d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:24,405 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,405 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6b3604ec{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:24,519 INFO [Listener at localhost/41775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:24,520 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:24,520 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:24,521 INFO [Listener at localhost/41775] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:17:24,522 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:24,523 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@68e7269f{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/jetty-0_0_0_0-37915-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3213846137412557359/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:24,524 INFO [Listener at localhost/41775] server.AbstractConnector(333): Started ServerConnector@3ed2d6d6{HTTP/1.1, (http/1.1)}{0.0.0.0:37915} 2023-07-11 18:17:24,525 INFO [Listener at localhost/41775] server.Server(415): Started @44875ms 2023-07-11 18:17:24,527 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:24,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@2a39c821{HTTP/1.1, (http/1.1)}{0.0.0.0:40771} 2023-07-11 18:17:24,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @44882ms 2023-07-11 18:17:24,532 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:24,533 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 18:17:24,534 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:24,536 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:24,536 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:24,536 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:24,536 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:24,536 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:24,539 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:17:24,540 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:17:24,540 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,38477,1689099443878 from backup master directory 2023-07-11 18:17:24,542 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:24,542 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-11 18:17:24,542 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:24,542 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:24,975 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/hbase.id with ID: 0a322711-7f42-487f-aa63-235ae6645494 2023-07-11 18:17:24,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:24,989 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:25,003 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x488cee0b to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:25,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1c3401fe, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:25,008 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:25,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-11 18:17:25,009 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:25,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store-tmp 2023-07-11 18:17:25,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 18:17:25,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:25,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:25,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 18:17:25,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:25,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
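The master:store descriptor printed above (a single 'proc' family with ROW bloom filter, one version, 64 KB blocks, no compression) can be reproduced with the ordinary HBase 2.x descriptor builders. The snippet below is only an illustrative sketch with a hypothetical table name; the master builds its local store region internally rather than through the client API.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilySketch {
        public static void main(String[] args) {
            // Hypothetical table name; the family settings mirror the 'proc' family in the log:
            // BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536', COMPRESSION => 'NONE', BLOCKCACHE => 'true'.
            TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("proc_demo"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                    .setBloomFilterType(BloomType.ROW)
                    .setMaxVersions(1)
                    .setInMemory(false)
                    .setBlocksize(65536)
                    .setCompressionType(Compression.Algorithm.NONE)
                    .setBlockCacheEnabled(true)
                    .build())
                .build();
            System.out.println(td);
        }
    }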
2023-07-11 18:17:25,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:25,021 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/WALs/jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:25,023 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38477%2C1689099443878, suffix=, logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/WALs/jenkins-hbase4.apache.org,38477,1689099443878, archiveDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/oldWALs, maxLogs=10 2023-07-11 18:17:25,040 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK] 2023-07-11 18:17:25,040 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK] 2023-07-11 18:17:25,040 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK] 2023-07-11 18:17:25,042 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/WALs/jenkins-hbase4.apache.org,38477,1689099443878/jenkins-hbase4.apache.org%2C38477%2C1689099443878.1689099445024 2023-07-11 18:17:25,042 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK], DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK]] 2023-07-11 18:17:25,042 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:25,042 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,042 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:25,042 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:25,045 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:25,046 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-11 18:17:25,046 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-11 18:17:25,047 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:25,048 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:25,051 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-11 18:17:25,053 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:25,054 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12033822240, jitterRate=0.12073703110218048}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:25,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:25,054 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-11 18:17:25,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-11 18:17:25,056 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-11 18:17:25,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-11 18:17:25,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-11 18:17:25,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-11 18:17:25,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-11 18:17:25,057 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-11 18:17:25,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-11 18:17:25,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-11 18:17:25,059 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-11 18:17:25,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-11 18:17:25,065 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:25,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-11 18:17:25,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-11 18:17:25,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-11 18:17:25,068 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:25,068 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:25,068 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-11 18:17:25,068 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:25,068 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:25,068 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,38477,1689099443878, sessionid=0x101559aa18f0000, setting cluster-up flag (Was=false) 2023-07-11 18:17:25,073 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:25,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-11 18:17:25,082 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:25,087 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:25,090 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-11 18:17:25,091 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:25,092 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.hbase-snapshot/.tmp 2023-07-11 18:17:25,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-11 18:17:25,093 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-11 18:17:25,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-11 18:17:25,094 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 18:17:25,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
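The two "System coprocessor ... loaded" lines above come from the master coprocessor host. A minimal sketch of how such coprocessors are normally wired in, assuming the standard hbase.coprocessor.master.classes key (exposed as CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY); in this test run the class list would be injected into the mini-cluster configuration rather than read from hbase-site.xml.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

    public class MasterCoprocessorConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Comma-separated list of master coprocessors; RSGroupAdminEndpoint is the one named in the log.
            conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
            System.out.println(conf.get(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY));
        }
    }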
2023-07-11 18:17:25,095 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-11 18:17:25,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 18:17:25,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 18:17:25,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-11 18:17:25,106 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:25,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689099475110 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-11 18:17:25,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-11 18:17:25,111 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,115 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-11 18:17:25,116 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 18:17:25,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-11 18:17:25,116 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-11 18:17:25,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-11 18:17:25,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-11 18:17:25,116 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-11 18:17:25,117 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:25,123 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099445116,5,FailOnTimeoutGroup] 2023-07-11 18:17:25,130 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099445124,5,FailOnTimeoutGroup] 2023-07-11 18:17:25,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,132 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(951): ClusterId : 0a322711-7f42-487f-aa63-235ae6645494 2023-07-11 18:17:25,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-11 18:17:25,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,140 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,141 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(951): ClusterId : 0a322711-7f42-487f-aa63-235ae6645494 2023-07-11 18:17:25,151 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:25,154 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:25,157 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(951): ClusterId : 0a322711-7f42-487f-aa63-235ae6645494 2023-07-11 18:17:25,157 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:25,157 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:25,157 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:25,159 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:25,159 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:25,159 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:25,159 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:25,162 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:17:25,163 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:17:25,163 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:17:25,170 DEBUG [RS:0;jenkins-hbase4:33705] 
zookeeper.ReadOnlyZKClient(139): Connect 0x2f6c26b1 to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:25,171 DEBUG [RS:2;jenkins-hbase4:41487] zookeeper.ReadOnlyZKClient(139): Connect 0x05f70b4e to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:25,171 DEBUG [RS:1;jenkins-hbase4:37037] zookeeper.ReadOnlyZKClient(139): Connect 0x3b5fec2a to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:25,184 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:25,185 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:25,185 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845 2023-07-11 18:17:25,187 DEBUG [RS:0;jenkins-hbase4:33705] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@21abcd2c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:25,187 DEBUG [RS:1;jenkins-hbase4:37037] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6711d71a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:25,187 DEBUG [RS:0;jenkins-hbase4:33705] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@25944dc1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:25,187 DEBUG [RS:1;jenkins-hbase4:37037] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a2ef787, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, 
writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:25,191 DEBUG [RS:2;jenkins-hbase4:41487] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@206df272, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:25,191 DEBUG [RS:2;jenkins-hbase4:41487] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34b88ea3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:25,197 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33705 2023-07-11 18:17:25,197 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37037 2023-07-11 18:17:25,197 INFO [RS:0;jenkins-hbase4:33705] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:25,197 INFO [RS:1;jenkins-hbase4:37037] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:25,197 INFO [RS:1;jenkins-hbase4:37037] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:25,197 INFO [RS:0;jenkins-hbase4:33705] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:25,197 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:17:25,197 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-11 18:17:25,198 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38477,1689099443878 with isa=jenkins-hbase4.apache.org/172.31.14.131:37037, startcode=1689099444235 2023-07-11 18:17:25,198 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38477,1689099443878 with isa=jenkins-hbase4.apache.org/172.31.14.131:33705, startcode=1689099444082 2023-07-11 18:17:25,199 DEBUG [RS:0;jenkins-hbase4:33705] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:25,199 DEBUG [RS:1;jenkins-hbase4:37037] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:25,202 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46523, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:25,202 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41487 2023-07-11 18:17:25,204 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38477] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,204 INFO [RS:2;jenkins-hbase4:41487] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:25,204 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 18:17:25,204 INFO [RS:2;jenkins-hbase4:41487] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:25,205 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1022): About to register with Master. 
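Each reportForDuty above ends with the master's ServerManager registering the region server. Once registration completes, a client can observe the same three servers through the Admin API; a rough sketch, assuming the ZooKeeper ensemble 127.0.0.1:50731 from this run were still reachable (the port is ephemeral and changes per run).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListRegionServersSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "127.0.0.1");           // test ensemble host from the log
            conf.set("hbase.zookeeper.property.clientPort", "50731");  // ephemeral test port
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // One ServerName per "Registering regionserver=..." line above.
                for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
                    System.out.println(sn);
                }
            }
        }
    }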
2023-07-11 18:17:25,205 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-11 18:17:25,205 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54253, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:25,205 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845 2023-07-11 18:17:25,205 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43601 2023-07-11 18:17:25,205 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46721 2023-07-11 18:17:25,205 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38477] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,205 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38477,1689099443878 with isa=jenkins-hbase4.apache.org/172.31.14.131:41487, startcode=1689099444383 2023-07-11 18:17:25,205 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-11 18:17:25,205 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-11 18:17:25,206 DEBUG [RS:2;jenkins-hbase4:41487] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:25,206 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845 2023-07-11 18:17:25,206 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43601 2023-07-11 18:17:25,206 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46721 2023-07-11 18:17:25,207 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51259, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:25,207 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:25,207 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38477] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,207 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
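The ServerEventsListenerThread messages show RSGroupInfoManagerImpl folding each newly registered server into the default group. A sketch of how that membership could be inspected afterwards, assuming the RSGroupAdminClient helper from the hbase-rsgroup module and a connection configured as in the previous sketch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // assumes quorum/port are set as above
            try (Connection conn = ConnectionFactory.createConnection(conf)) {
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                // After the final "Updated with servers: 3", the default group should list all three servers.
                RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
                System.out.println(defaultGroup.getName() + " -> " + defaultGroup.getServers());
            }
        }
    }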
2023-07-11 18:17:25,207 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-11 18:17:25,208 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845 2023-07-11 18:17:25,208 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43601 2023-07-11 18:17:25,208 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46721 2023-07-11 18:17:25,208 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,209 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:17:25,211 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/info 2023-07-11 18:17:25,211 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:17:25,212 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,212 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:17:25,214 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:17:25,214 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy 
for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:17:25,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,215 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:17:25,215 DEBUG [RS:1;jenkins-hbase4:37037] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,216 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37037,1689099444235] 2023-07-11 18:17:25,216 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33705,1689099444082] 2023-07-11 18:17:25,216 WARN [RS:1;jenkins-hbase4:37037] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:25,216 INFO [RS:1;jenkins-hbase4:37037] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:25,216 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,216 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/table 2023-07-11 18:17:25,218 DEBUG [RS:0;jenkins-hbase4:33705] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,218 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:25,218 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 
18:17:25,218 WARN [RS:0;jenkins-hbase4:33705] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:25,219 DEBUG [RS:2;jenkins-hbase4:41487] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,219 INFO [RS:0;jenkins-hbase4:33705] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:25,219 WARN [RS:2;jenkins-hbase4:41487] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-11 18:17:25,219 INFO [RS:2;jenkins-hbase4:41487] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:25,220 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,220 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,220 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41487,1689099444383] 2023-07-11 18:17:25,220 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,220 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740 2023-07-11 18:17:25,221 DEBUG [RS:1;jenkins-hbase4:37037] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,221 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740 2023-07-11 18:17:25,221 DEBUG [RS:1;jenkins-hbase4:37037] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,222 DEBUG [RS:1;jenkins-hbase4:37037] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,222 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:25,222 INFO [RS:1;jenkins-hbase4:37037] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:25,231 INFO [RS:1;jenkins-hbase4:37037] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:25,231 DEBUG [PEWorker-1] 
regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 18:17:25,231 DEBUG [RS:0;jenkins-hbase4:33705] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,232 DEBUG [RS:0;jenkins-hbase4:33705] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,232 DEBUG [RS:2;jenkins-hbase4:41487] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,232 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:17:25,232 DEBUG [RS:0;jenkins-hbase4:33705] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,232 DEBUG [RS:2;jenkins-hbase4:41487] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,233 DEBUG [RS:2;jenkins-hbase4:41487] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,233 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:25,233 INFO [RS:0;jenkins-hbase4:33705] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:25,234 DEBUG [RS:2;jenkins-hbase4:41487] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:25,234 INFO [RS:2;jenkins-hbase4:41487] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:25,235 INFO [RS:0;jenkins-hbase4:33705] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:25,235 INFO [RS:1;jenkins-hbase4:37037] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:25,235 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
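
The MemStoreFlusher and FlushLargeStoresPolicy figures above (globalMemStoreLimit=782.4 M, lowMark=743.3 M, per-family bound 42.7 M) line up with stock defaults. A minimal arithmetic sketch in Java, assuming a ~1956 MB max heap for the test region server JVM and the default fractions hbase.regionserver.global.memstore.size=0.4 and hbase.regionserver.global.memstore.size.lower.limit=0.95; hbase:meta has three column families (info, rep_barrier, table) and the default region flush size is 128 MB. Class and variable names are illustrative, not HBase internals.

// Back-of-the-envelope reconstruction of the memstore figures logged above.
// All constants are assumed defaults; this is not HBase's actual code path.
public class MemStoreLimitSketch {
    public static void main(String[] args) {
        double maxHeapMb = 1956.0;          // assumed -Xmx of the test region server JVM
        double globalFraction = 0.4;        // hbase.regionserver.global.memstore.size (default)
        double lowerLimitFraction = 0.95;   // hbase.regionserver.global.memstore.size.lower.limit (default)

        double globalLimitMb = maxHeapMb * globalFraction;       // ~782.4 M in the log
        double lowMarkMb = globalLimitMb * lowerLimitFraction;   // ~743.3 M in the log
        System.out.printf("globalMemStoreLimit=%.1f M, lowMark=%.1f M%n", globalLimitMb, lowMarkMb);

        // FlushLargeStoresPolicy fallback: region flush size divided by the number of families.
        double regionFlushSizeMb = 128.0;   // hbase.hregion.memstore.flush.size (default)
        int metaFamilies = 3;               // info, rep_barrier, table
        System.out.printf("per-family flush lower bound=%.1f M%n", regionFlushSizeMb / metaFamilies); // ~42.7 M
    }
}
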
2023-07-11 18:17:25,239 INFO [RS:2;jenkins-hbase4:41487] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:25,239 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:25,243 INFO [RS:0;jenkins-hbase4:33705] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:25,243 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,243 INFO [RS:2;jenkins-hbase4:41487] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:25,243 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:25,243 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,244 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:25,244 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:25,244 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,246 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11418193440, jitterRate=0.06340213119983673}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:17:25,246 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
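
The split policy printed when region 1588230740 is opened (desiredMaxFileSize=11418193440, jitterRate=0.06340213119983673) is consistent with the default 10 GB maximum file size plus a random jitter of up to roughly ±12.5%. A rough sketch of that relationship, assuming the defaults hbase.hregion.max.filesize=10737418240 and hbase.hregion.max.filesize.jitter=0.25; the class name and random source are illustrative.

import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of how a jittered desiredMaxFileSize like the ones in this log
// can be derived from the configured maximum; not the actual split-policy code.
public class SplitSizeJitterSketch {
    public static void main(String[] args) {
        long configuredMaxFileSize = 10_737_418_240L;   // hbase.hregion.max.filesize default (10 GB)
        double jitter = 0.25;                           // hbase.hregion.max.filesize.jitter default

        // A random rate in roughly [-0.125, 0.125): half the jitter span in either direction.
        double jitterRate = (ThreadLocalRandom.current().nextDouble() - 0.5) * jitter;
        long desiredMaxFileSize = configuredMaxFileSize + (long) (configuredMaxFileSize * jitterRate);

        // With jitterRate = 0.06340213..., this lands near the 11418193440 logged above.
        System.out.println("jitterRate=" + jitterRate + ", desiredMaxFileSize=" + desiredMaxFileSize);
    }
}
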
2023-07-11 18:17:25,246 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:17:25,246 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:17:25,246 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:25,246 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:17:25,247 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,246 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-11 18:17:25,246 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,247 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,247 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,247 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:17:25,247 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,247 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,247 DEBUG [RS:1;jenkins-hbase4:37037] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,247 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:17:25,247 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:17:25,248 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:25,248 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:0;jenkins-hbase4:33705] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,248 DEBUG [RS:2;jenkins-hbase4:41487] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:25,254 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,254 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,254 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,254 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,254 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,255 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,255 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:25,255 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,255 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,255 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
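
Each "Starting executor service name=..., corePoolSize=1, maxPoolSize=1" line above corresponds to a dedicated handler pool per event type (region open, region close, log replay, and so on). A minimal analogue using plain java.util.concurrent, only to illustrate what those pool-size pairs mean; the pool names in the map come from the log, everything else is illustrative rather than HBase's ExecutorService implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative only: one bounded thread pool per named event type, mirroring the
// corePoolSize/maxPoolSize pairs printed above. Not HBase's executor code.
public class HandlerPoolsSketch {
    public static void main(String[] args) {
        Map<String, Integer> poolSizes = Map.of(
                "RS_OPEN_REGION", 1,
                "RS_CLOSE_REGION", 1,
                "RS_LOG_REPLAY_OPS", 2);   // the only pool above sized at two threads

        Map<String, ExecutorService> pools = new ConcurrentHashMap<>();
        poolSizes.forEach((name, size) ->
                pools.put(name, new ThreadPoolExecutor(
                        size, size,                   // corePoolSize == maxPoolSize, as logged
                        60L, TimeUnit.SECONDS,
                        new LinkedBlockingQueue<>(),  // unbounded work queue
                        r -> new Thread(r, name + "-handler"))));

        pools.get("RS_OPEN_REGION").submit(() -> System.out.println("open-region work runs here"));
        pools.values().forEach(ExecutorService::shutdown);
    }
}
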
2023-07-11 18:17:25,255 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:17:25,256 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-11 18:17:25,256 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-11 18:17:25,256 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-11 18:17:25,258 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-11 18:17:25,263 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-11 18:17:25,271 INFO [RS:0;jenkins-hbase4:33705] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:25,271 INFO [RS:1;jenkins-hbase4:37037] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:25,271 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33705,1689099444082-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,271 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37037,1689099444235-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,274 INFO [RS:2;jenkins-hbase4:41487] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:25,274 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41487,1689099444383-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
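
The ChoreService lines above ("ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled", the per-server HeapMemoryTunerChore every 60000 ms, and so on) register periodic background tasks. A bare-bones stand-in using a ScheduledExecutorService, just to illustrate the period/unit semantics; the task bodies are placeholders, not what the real chores do internally.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for the chores listed above: fixed-period background tasks.
public class ChoreSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService choreService = Executors.newScheduledThreadPool(1);

        // name=CompactionChecker, period=1000, unit=MILLISECONDS
        choreService.scheduleAtFixedRate(
                () -> System.out.println("check stores for compaction candidates"),
                1_000, 1_000, TimeUnit.MILLISECONDS);

        // name=...HeapMemoryTunerChore, period=60000, unit=MILLISECONDS
        choreService.scheduleAtFixedRate(
                () -> System.out.println("re-tune memstore vs. block cache sizing"),
                60_000, 60_000, TimeUnit.MILLISECONDS);

        TimeUnit.SECONDS.sleep(3);   // let the fast chore fire a couple of times
        choreService.shutdownNow();
    }
}
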
2023-07-11 18:17:25,284 INFO [RS:1;jenkins-hbase4:37037] regionserver.Replication(203): jenkins-hbase4.apache.org,37037,1689099444235 started 2023-07-11 18:17:25,284 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37037,1689099444235, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37037, sessionid=0x101559aa18f0002 2023-07-11 18:17:25,284 INFO [RS:0;jenkins-hbase4:33705] regionserver.Replication(203): jenkins-hbase4.apache.org,33705,1689099444082 started 2023-07-11 18:17:25,284 INFO [RS:2;jenkins-hbase4:41487] regionserver.Replication(203): jenkins-hbase4.apache.org,41487,1689099444383 started 2023-07-11 18:17:25,284 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33705,1689099444082, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33705, sessionid=0x101559aa18f0001 2023-07-11 18:17:25,284 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:25,284 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:25,284 DEBUG [RS:0;jenkins-hbase4:33705] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,284 DEBUG [RS:1;jenkins-hbase4:37037] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,284 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41487,1689099444383, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41487, sessionid=0x101559aa18f0003 2023-07-11 18:17:25,284 DEBUG [RS:1;jenkins-hbase4:37037] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37037,1689099444235' 2023-07-11 18:17:25,284 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:25,284 DEBUG [RS:2;jenkins-hbase4:41487] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,284 DEBUG [RS:2;jenkins-hbase4:41487] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41487,1689099444383' 2023-07-11 18:17:25,284 DEBUG [RS:2;jenkins-hbase4:41487] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:25,284 DEBUG [RS:0;jenkins-hbase4:33705] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33705,1689099444082' 2023-07-11 18:17:25,285 DEBUG [RS:0;jenkins-hbase4:33705] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:25,284 DEBUG [RS:1;jenkins-hbase4:37037] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:25,285 DEBUG [RS:0;jenkins-hbase4:33705] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:25,285 DEBUG [RS:2;jenkins-hbase4:41487] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:25,285 DEBUG 
[RS:1;jenkins-hbase4:37037] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:25,285 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:25,285 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:25,285 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:25,285 DEBUG [RS:0;jenkins-hbase4:33705] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,285 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:25,286 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:25,286 DEBUG [RS:1;jenkins-hbase4:37037] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,286 DEBUG [RS:1;jenkins-hbase4:37037] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37037,1689099444235' 2023-07-11 18:17:25,286 DEBUG [RS:1;jenkins-hbase4:37037] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:25,285 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:25,286 DEBUG [RS:0;jenkins-hbase4:33705] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33705,1689099444082' 2023-07-11 18:17:25,286 DEBUG [RS:0;jenkins-hbase4:33705] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:25,286 DEBUG [RS:2;jenkins-hbase4:41487] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:25,286 DEBUG [RS:2;jenkins-hbase4:41487] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41487,1689099444383' 2023-07-11 18:17:25,286 DEBUG [RS:2;jenkins-hbase4:41487] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:25,286 DEBUG [RS:1;jenkins-hbase4:37037] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:25,286 DEBUG [RS:0;jenkins-hbase4:33705] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:25,286 DEBUG [RS:2;jenkins-hbase4:41487] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:25,286 DEBUG [RS:1;jenkins-hbase4:37037] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:25,286 INFO [RS:1;jenkins-hbase4:37037] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:17:25,286 INFO [RS:1;jenkins-hbase4:37037] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
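
Each region server's ZKProcedureMemberRpcs first checks an abort znode and then looks for (and watches) new procedures under an acquired znode, for both the flush-table-proc and online-snapshot frameworks. A sketch of that check-then-watch pattern with the plain ZooKeeper client; the znode paths and quorum address are taken from the log, while the class and its handling are illustrative, not HBase's procedure member code.

import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Illustrative check-then-watch loop for a procedure member; not HBase's implementation.
public class ProcedureMemberSketch {
    public static void main(String[] args) throws Exception {
        // Quorum address from the log; the default watcher here simply ignores session events.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:50731", 30_000, event -> { });

        // "Checking for aborted procedures on node: '/hbase/flush-table-proc/abort'"
        for (String proc : children(zk, "/hbase/flush-table-proc/abort", false)) {
            System.out.println("aborted procedure found: " + proc);
        }

        // "Looking for new procedures under znode:'/hbase/flush-table-proc/acquired'"
        // watch=true arms a one-shot children-changed notification on the default watcher.
        for (String proc : children(zk, "/hbase/flush-table-proc/acquired", true)) {
            System.out.println("procedure waiting to be acquired: " + proc);
        }
        zk.close();
    }

    private static List<String> children(ZooKeeper zk, String path, boolean watch) throws Exception {
        try {
            return zk.getChildren(path, watch);
        } catch (KeeperException.NoNodeException e) {
            return Collections.emptyList();   // znode not created yet
        }
    }
}
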
2023-07-11 18:17:25,286 DEBUG [RS:2;jenkins-hbase4:41487] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:25,286 DEBUG [RS:0;jenkins-hbase4:33705] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:25,287 INFO [RS:2;jenkins-hbase4:41487] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:17:25,287 INFO [RS:2;jenkins-hbase4:41487] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 18:17:25,287 INFO [RS:0;jenkins-hbase4:33705] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:17:25,287 INFO [RS:0;jenkins-hbase4:33705] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-11 18:17:25,388 INFO [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33705%2C1689099444082, suffix=, logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,33705,1689099444082, archiveDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs, maxLogs=32 2023-07-11 18:17:25,388 INFO [RS:1;jenkins-hbase4:37037] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37037%2C1689099444235, suffix=, logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,37037,1689099444235, archiveDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs, maxLogs=32 2023-07-11 18:17:25,388 INFO [RS:2;jenkins-hbase4:41487] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41487%2C1689099444383, suffix=, logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,41487,1689099444383, archiveDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs, maxLogs=32 2023-07-11 18:17:25,406 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK] 2023-07-11 18:17:25,410 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK] 2023-07-11 18:17:25,410 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK] 2023-07-11 18:17:25,411 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK] 2023-07-11 18:17:25,412 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK] 2023-07-11 18:17:25,413 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK] 2023-07-11 18:17:25,413 DEBUG [jenkins-hbase4:38477] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-11 18:17:25,413 DEBUG [jenkins-hbase4:38477] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:25,413 DEBUG [jenkins-hbase4:38477] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:25,413 DEBUG [jenkins-hbase4:38477] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:25,413 DEBUG [jenkins-hbase4:38477] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:25,413 DEBUG [jenkins-hbase4:38477] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:25,420 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33705,1689099444082, state=OPENING 2023-07-11 18:17:25,420 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK] 2023-07-11 18:17:25,421 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK] 2023-07-11 18:17:25,421 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK] 2023-07-11 18:17:25,421 INFO [RS:1;jenkins-hbase4:37037] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,37037,1689099444235/jenkins-hbase4.apache.org%2C37037%2C1689099444235.1689099445389 2023-07-11 18:17:25,421 DEBUG [RS:1;jenkins-hbase4:37037] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK], DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK], DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK]] 2023-07-11 18:17:25,421 INFO [RS:2;jenkins-hbase4:41487] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,41487,1689099444383/jenkins-hbase4.apache.org%2C41487%2C1689099444383.1689099445389 2023-07-11 18:17:25,422 DEBUG [PEWorker-4] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-11 18:17:25,423 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase 
Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:25,423 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33705,1689099444082}] 2023-07-11 18:17:25,424 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:17:25,427 DEBUG [RS:2;jenkins-hbase4:41487] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK], DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK], DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK]] 2023-07-11 18:17:25,428 INFO [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,33705,1689099444082/jenkins-hbase4.apache.org%2C33705%2C1689099444082.1689099445389 2023-07-11 18:17:25,428 DEBUG [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK], DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK], DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK]] 2023-07-11 18:17:25,579 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,580 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:17:25,581 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44794, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:17:25,585 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-11 18:17:25,585 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:25,587 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33705%2C1689099444082.meta, suffix=.meta, logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,33705,1689099444082, archiveDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs, maxLogs=32 2023-07-11 18:17:25,600 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK] 2023-07-11 18:17:25,602 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK] 2023-07-11 18:17:25,602 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client 
skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK] 2023-07-11 18:17:25,605 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,33705,1689099444082/jenkins-hbase4.apache.org%2C33705%2C1689099444082.meta.1689099445587.meta 2023-07-11 18:17:25,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42183,DS-4de7cb8d-f313-4791-bb16-a91a9507048d,DISK], DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK]] 2023-07-11 18:17:25,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:25,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:17:25,606 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-11 18:17:25,606 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-11 18:17:25,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-11 18:17:25,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-11 18:17:25,607 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-11 18:17:25,608 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-11 18:17:25,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/info 2023-07-11 18:17:25,609 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/info 2023-07-11 18:17:25,609 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major 
period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-11 18:17:25,610 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,610 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-11 18:17:25,611 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:17:25,611 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/rep_barrier 2023-07-11 18:17:25,611 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-11 18:17:25,611 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,612 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-11 18:17:25,612 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/table 2023-07-11 18:17:25,612 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/table 2023-07-11 18:17:25,613 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle 
point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-11 18:17:25,613 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,614 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740 2023-07-11 18:17:25,615 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740 2023-07-11 18:17:25,617 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-11 18:17:25,618 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-11 18:17:25,618 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11740116640, jitterRate=0.09338356554508209}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-11 18:17:25,619 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-11 18:17:25,619 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689099445579 2023-07-11 18:17:25,623 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-11 18:17:25,624 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-11 18:17:25,624 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33705,1689099444082, state=OPEN 2023-07-11 18:17:25,625 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-11 18:17:25,626 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-11 18:17:25,627 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-11 18:17:25,627 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 
1588230740, server=jenkins-hbase4.apache.org,33705,1689099444082 in 203 msec 2023-07-11 18:17:25,628 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-11 18:17:25,629 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 371 msec 2023-07-11 18:17:25,630 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 534 msec 2023-07-11 18:17:25,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689099445630, completionTime=-1 2023-07-11 18:17:25,630 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-11 18:17:25,630 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-11 18:17:25,634 DEBUG [hconnection-0x282e3da3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:25,636 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44800, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:25,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-11 18:17:25,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689099505638 2023-07-11 18:17:25,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689099565638 2023-07-11 18:17:25,638 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 7 msec 2023-07-11 18:17:25,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38477,1689099443878-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38477,1689099443878-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38477,1689099443878-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:38477, period=300000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:25,643 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-11 18:17:25,644 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:25,644 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-11 18:17:25,647 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:25,647 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-11 18:17:25,648 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:25,649 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,650 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba empty. 2023-07-11 18:17:25,650 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,650 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-11 18:17:25,671 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:25,672 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 0b656c866d17506671369550ab5ca4ba, NAME => 'hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp 2023-07-11 18:17:25,683 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,683 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 0b656c866d17506671369550ab5ca4ba, disabling compactions & flushes 2023-07-11 18:17:25,683 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 
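
The 'hbase:namespace' descriptor above spells out its single column family (VERSIONS => '10', IN_MEMORY => 'true', BLOCKSIZE => '8192', BLOOMFILTER => 'ROW'). For reference, a client-side sketch that builds a roughly equivalent descriptor with the HBase 2.x builder API; the table name, connection setup and the act of creating it from a client are illustrative, this is not the master's internal CreateTableProcedure path.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of a descriptor roughly matching the 'hbase:namespace' definition in the log.
public class NamespaceTableSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {

            TableDescriptor desc = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("demo_namespace_like"))   // illustrative name
                    .setColumnFamily(ColumnFamilyDescriptorBuilder
                            .newBuilder(Bytes.toBytes("info"))
                            .setMaxVersions(10)                 // VERSIONS => '10'
                            .setInMemory(true)                  // IN_MEMORY => 'true'
                            .setBlocksize(8192)                 // BLOCKSIZE => '8192'
                            .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
                            .build())
                    .build();

            admin.createTable(desc);
        }
    }
}
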
2023-07-11 18:17:25,683 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:25,683 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. after waiting 0 ms 2023-07-11 18:17:25,683 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:25,684 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:25,684 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 0b656c866d17506671369550ab5ca4ba: 2023-07-11 18:17:25,686 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:25,687 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099445687"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099445687"}]},"ts":"1689099445687"} 2023-07-11 18:17:25,689 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:25,690 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:25,690 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099445690"}]},"ts":"1689099445690"} 2023-07-11 18:17:25,691 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-11 18:17:25,694 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:25,695 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:25,695 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:25,695 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:25,695 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:25,695 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0b656c866d17506671369550ab5ca4ba, ASSIGN}] 2023-07-11 18:17:25,697 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=0b656c866d17506671369550ab5ca4ba, ASSIGN 2023-07-11 18:17:25,698 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=0b656c866d17506671369550ab5ca4ba, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37037,1689099444235; forceNewPlan=false, retain=false 2023-07-11 18:17:25,706 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:25,707 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-11 18:17:25,709 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:25,710 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:25,711 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,712 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752 empty. 
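
The 'hbase:rsgroup' descriptor above differs mainly in its table-level attributes: a MultiRowMutationEndpoint coprocessor and a SPLIT_POLICY of DisabledRegionSplitPolicy. A short sketch of expressing those two attributes with the same builder API; the table name is illustrative and this is only an approximation of the descriptor in the log, not the RSGroupInfoManager's own code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

// Sketch of the two table-level attributes visible in the 'hbase:rsgroup' descriptor:
// a coprocessor endpoint and a split policy that disables splitting. Illustrative only.
public class RsGroupTableSketch {
    public static void main(String[] args) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo_rsgroup_like"))   // illustrative name
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                .setRegionSplitPolicyClassName(
                        "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))   // single family 'm'
                .build();

        System.out.println(desc);
    }
}
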
2023-07-11 18:17:25,712 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,712 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-11 18:17:25,725 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:25,727 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => e31fea5d4bfe5b3b2aebd24b6c92d752, NAME => 'hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp 2023-07-11 18:17:25,737 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing e31fea5d4bfe5b3b2aebd24b6c92d752, disabling compactions & flushes 2023-07-11 18:17:25,738 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:25,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:25,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. after waiting 0 ms 2023-07-11 18:17:25,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:25,738 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 
2023-07-11 18:17:25,738 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for e31fea5d4bfe5b3b2aebd24b6c92d752: 2023-07-11 18:17:25,740 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:25,741 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099445741"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099445741"}]},"ts":"1689099445741"} 2023-07-11 18:17:25,742 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:25,743 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:25,743 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099445743"}]},"ts":"1689099445743"} 2023-07-11 18:17:25,744 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-11 18:17:25,747 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:25,747 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:25,747 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:25,747 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:25,747 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:25,748 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e31fea5d4bfe5b3b2aebd24b6c92d752, ASSIGN}] 2023-07-11 18:17:25,748 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=e31fea5d4bfe5b3b2aebd24b6c92d752, ASSIGN 2023-07-11 18:17:25,749 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=e31fea5d4bfe5b3b2aebd24b6c92d752, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33705,1689099444082; forceNewPlan=false, retain=false 2023-07-11 18:17:25,749 INFO [jenkins-hbase4:38477] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
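The ASSIGN subprocedures initialized above complete asynchronously; in test code the usual way to block until both system tables are actually being served is the testing utility's wait helpers. A minimal sketch, assuming a running HBaseTestingUtility handle as this test class has:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    final class AssignmentWait {
      // Blocks until every region of the two system tables created above is assigned and open.
      static void waitForSystemTables(HBaseTestingUtility util)
          throws IOException, InterruptedException {
        util.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
        util.waitTableAvailable(TableName.valueOf("hbase:rsgroup"));
      }
    }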
2023-07-11 18:17:25,751 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0b656c866d17506671369550ab5ca4ba, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,752 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e31fea5d4bfe5b3b2aebd24b6c92d752, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,752 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099445751"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099445751"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099445751"}]},"ts":"1689099445751"} 2023-07-11 18:17:25,752 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099445752"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099445752"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099445752"}]},"ts":"1689099445752"} 2023-07-11 18:17:25,753 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 0b656c866d17506671369550ab5ca4ba, server=jenkins-hbase4.apache.org,37037,1689099444235}] 2023-07-11 18:17:25,754 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure e31fea5d4bfe5b3b2aebd24b6c92d752, server=jenkins-hbase4.apache.org,33705,1689099444082}] 2023-07-11 18:17:25,909 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,910 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:17:25,912 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34918, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:17:25,914 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:25,914 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e31fea5d4bfe5b3b2aebd24b6c92d752, NAME => 'hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:25,914 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-11 18:17:25,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. service=MultiRowMutationService 2023-07-11 18:17:25,915 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-11 18:17:25,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,915 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,917 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:25,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0b656c866d17506671369550ab5ca4ba, NAME => 'hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:25,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:25,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,917 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,917 INFO [StoreOpener-e31fea5d4bfe5b3b2aebd24b6c92d752-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,919 INFO [StoreOpener-0b656c866d17506671369550ab5ca4ba-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,919 DEBUG [StoreOpener-e31fea5d4bfe5b3b2aebd24b6c92d752-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/m 2023-07-11 18:17:25,919 DEBUG [StoreOpener-e31fea5d4bfe5b3b2aebd24b6c92d752-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/m 2023-07-11 18:17:25,920 INFO 
[StoreOpener-e31fea5d4bfe5b3b2aebd24b6c92d752-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e31fea5d4bfe5b3b2aebd24b6c92d752 columnFamilyName m 2023-07-11 18:17:25,920 INFO [StoreOpener-e31fea5d4bfe5b3b2aebd24b6c92d752-1] regionserver.HStore(310): Store=e31fea5d4bfe5b3b2aebd24b6c92d752/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,920 DEBUG [StoreOpener-0b656c866d17506671369550ab5ca4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/info 2023-07-11 18:17:25,920 DEBUG [StoreOpener-0b656c866d17506671369550ab5ca4ba-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/info 2023-07-11 18:17:25,921 INFO [StoreOpener-0b656c866d17506671369550ab5ca4ba-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0b656c866d17506671369550ab5ca4ba columnFamilyName info 2023-07-11 18:17:25,921 INFO [StoreOpener-0b656c866d17506671369550ab5ca4ba-1] regionserver.HStore(310): Store=0b656c866d17506671369550ab5ca4ba/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:25,921 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,922 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba 
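The CompactionConfiguration dump above is the store echoing its effective settings; the figures correspond to standard configuration keys. A hedged sketch of spelling the same values out in a Configuration (key names are the stock HBase ones, the values simply restate what the log prints):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    final class CompactionTuning {
      // Carries the same compaction settings the log reports for these stores.
      static Configuration defaultsSpelledOut() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period, 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
        return conf;
      }
    }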
2023-07-11 18:17:25,923 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,924 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:25,926 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:25,931 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:25,931 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:25,932 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e31fea5d4bfe5b3b2aebd24b6c92d752; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@4e1c6034, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:25,932 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0b656c866d17506671369550ab5ca4ba; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10692788320, jitterRate=-0.004156485199928284}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:25,932 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e31fea5d4bfe5b3b2aebd24b6c92d752: 2023-07-11 18:17:25,932 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0b656c866d17506671369550ab5ca4ba: 2023-07-11 18:17:25,935 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba., pid=8, masterSystemTime=1689099445909 2023-07-11 18:17:25,935 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752., pid=9, masterSystemTime=1689099445909 2023-07-11 18:17:25,939 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:25,940 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 
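Once the open journal above is written the regions are serveable, which a client can observe through the region locator. A minimal sketch (connection setup assumed, not part of this run):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class LocateNamespaceRegion {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:namespace"))) {
          // hbase:namespace has a single region covering the whole key space.
          HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
          System.out.println(loc.getRegion().getEncodedName() + " on " + loc.getServerName());
        }
      }
    }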
2023-07-11 18:17:25,942 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=0b656c866d17506671369550ab5ca4ba, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:25,942 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689099445941"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099445941"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099445941"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099445941"}]},"ts":"1689099445941"} 2023-07-11 18:17:25,946 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:25,946 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:25,947 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=e31fea5d4bfe5b3b2aebd24b6c92d752, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:25,947 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689099445947"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099445947"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099445947"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099445947"}]},"ts":"1689099445947"} 2023-07-11 18:17:25,948 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-11 18:17:25,948 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 0b656c866d17506671369550ab5ca4ba, server=jenkins-hbase4.apache.org,37037,1689099444235 in 193 msec 2023-07-11 18:17:25,950 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-11 18:17:25,950 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=0b656c866d17506671369550ab5ca4ba, ASSIGN in 253 msec 2023-07-11 18:17:25,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-11 18:17:25,951 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:25,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure e31fea5d4bfe5b3b2aebd24b6c92d752, server=jenkins-hbase4.apache.org,33705,1689099444082 in 195 msec 2023-07-11 18:17:25,951 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099445951"}]},"ts":"1689099445951"} 2023-07-11 18:17:25,952 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated 
tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-11 18:17:25,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-11 18:17:25,953 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=e31fea5d4bfe5b3b2aebd24b6c92d752, ASSIGN in 203 msec 2023-07-11 18:17:25,954 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:25,954 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099445954"}]},"ts":"1689099445954"} 2023-07-11 18:17:25,955 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-11 18:17:25,955 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:25,957 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 312 msec 2023-07-11 18:17:25,958 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:25,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 252 msec 2023-07-11 18:17:26,013 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-11 18:17:26,013 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
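The master next makes sure the 'default' and 'hbase' namespaces exist (the CreateNamespaceProcedure entries that follow). Those are internal initialization steps, but the equivalent public Admin call for a user namespace looks like the sketch below; the namespace name is illustrative only:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceExample {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Client-side counterpart of the CreateNamespaceProcedure entries in the log.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
        }
      }
    }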
2023-07-11 18:17:26,022 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:26,022 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:26,023 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:17:26,024 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-11 18:17:26,045 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-11 18:17:26,047 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:26,048 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:26,050 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:26,052 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34932, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:26,054 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-11 18:17:26,063 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:26,065 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 11 msec 2023-07-11 18:17:26,076 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-11 18:17:26,083 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:26,085 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-11 18:17:26,095 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-11 18:17:26,097 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-11 18:17:26,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.555sec 2023-07-11 18:17:26,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-11 18:17:26,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-11 18:17:26,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-11 18:17:26,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38477,1689099443878-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-11 18:17:26,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38477,1689099443878-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-11 18:17:26,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-11 18:17:26,156 DEBUG [Listener at localhost/41775] zookeeper.ReadOnlyZKClient(139): Connect 0x0c6b1b47 to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:26,162 DEBUG [Listener at localhost/41775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b2bdf38, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:26,164 DEBUG [hconnection-0xa29c91c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:26,166 INFO [RS-EventLoopGroup-13-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44808, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:26,167 INFO [Listener at localhost/41775] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:26,167 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:26,169 DEBUG [Listener at localhost/41775] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-11 18:17:26,171 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44334, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-11 18:17:26,174 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-11 18:17:26,174 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:26,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-11 18:17:26,175 DEBUG [Listener at localhost/41775] zookeeper.ReadOnlyZKClient(139): Connect 0x66e9ca67 to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:26,182 DEBUG [Listener at localhost/41775] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@22cc8e8f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:26,182 INFO [Listener at localhost/41775] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:50731 2023-07-11 18:17:26,186 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:26,187 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101559aa18f000a connected 2023-07-11 18:17:26,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:26,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:26,193 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-11 18:17:26,204 INFO [Listener at localhost/41775] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-11 18:17:26,205 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:26,205 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:26,205 INFO [Listener at localhost/41775] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-11 18:17:26,205 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-11 18:17:26,205 INFO [Listener at localhost/41775] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-11 18:17:26,205 INFO [Listener at localhost/41775] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-11 18:17:26,206 INFO [Listener at localhost/41775] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36281 2023-07-11 18:17:26,206 INFO [Listener at 
localhost/41775] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-11 18:17:26,207 DEBUG [Listener at localhost/41775] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-11 18:17:26,208 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:26,209 INFO [Listener at localhost/41775] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-11 18:17:26,210 INFO [Listener at localhost/41775] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36281 connecting to ZooKeeper ensemble=127.0.0.1:50731 2023-07-11 18:17:26,213 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:362810x0, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-11 18:17:26,215 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(162): regionserver:362810x0, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-11 18:17:26,215 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36281-0x101559aa18f000b connected 2023-07-11 18:17:26,216 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(162): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-11 18:17:26,216 DEBUG [Listener at localhost/41775] zookeeper.ZKUtil(164): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-11 18:17:26,217 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36281 2023-07-11 18:17:26,217 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36281 2023-07-11 18:17:26,218 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36281 2023-07-11 18:17:26,222 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36281 2023-07-11 18:17:26,223 DEBUG [Listener at localhost/41775] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36281 2023-07-11 18:17:26,224 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-11 18:17:26,224 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-11 18:17:26,225 INFO [Listener at localhost/41775] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-11 18:17:26,225 INFO [Listener at localhost/41775] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-11 18:17:26,225 INFO 
[Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-11 18:17:26,225 INFO [Listener at localhost/41775] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-11 18:17:26,225 INFO [Listener at localhost/41775] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-11 18:17:26,226 INFO [Listener at localhost/41775] http.HttpServer(1146): Jetty bound to port 35213 2023-07-11 18:17:26,226 INFO [Listener at localhost/41775] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-11 18:17:26,227 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:26,227 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@17ccdbfb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,AVAILABLE} 2023-07-11 18:17:26,227 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:26,227 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@e1fbe15{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-11 18:17:26,340 INFO [Listener at localhost/41775] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-11 18:17:26,340 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-11 18:17:26,341 INFO [Listener at localhost/41775] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-11 18:17:26,341 INFO [Listener at localhost/41775] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-11 18:17:26,341 INFO [Listener at localhost/41775] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-11 18:17:26,342 INFO [Listener at localhost/41775] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2c9d9022{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/java.io.tmpdir/jetty-0_0_0_0-35213-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4512234459639207303/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:26,344 INFO [Listener at localhost/41775] server.AbstractConnector(333): Started ServerConnector@67ada82a{HTTP/1.1, (http/1.1)}{0.0.0.0:35213} 2023-07-11 18:17:26,344 INFO [Listener at localhost/41775] server.Server(415): Started @46694ms 2023-07-11 18:17:26,347 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(951): ClusterId : 0a322711-7f42-487f-aa63-235ae6645494 2023-07-11 18:17:26,347 DEBUG [RS:3;jenkins-hbase4:36281] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-11 18:17:26,349 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-11 18:17:26,349 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-11 18:17:26,351 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-11 18:17:26,352 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ReadOnlyZKClient(139): Connect 0x664981a1 to 127.0.0.1:50731 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-11 18:17:26,358 DEBUG [RS:3;jenkins-hbase4:36281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@688e7318, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-11 18:17:26,358 DEBUG [RS:3;jenkins-hbase4:36281] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cb1ead1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:26,366 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:36281 2023-07-11 18:17:26,367 INFO [RS:3;jenkins-hbase4:36281] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-11 18:17:26,367 INFO [RS:3;jenkins-hbase4:36281] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-11 18:17:26,367 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1022): About to register with Master. 2023-07-11 18:17:26,367 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,38477,1689099443878 with isa=jenkins-hbase4.apache.org/172.31.14.131:36281, startcode=1689099446204 2023-07-11 18:17:26,367 DEBUG [RS:3;jenkins-hbase4:36281] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-11 18:17:26,370 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37737, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-11 18:17:26,370 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38477] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,370 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
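The RS:3 startup and reportForDuty above are the test restoring its region-server count ("Restoring servers: 1" earlier in the log). With HBaseTestingUtility the extra server is typically started through the minicluster handle; a short sketch under that assumption:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    final class AddRegionServer {
      // Starts one more region-server thread in the running minicluster and waits until it is online.
      static void addOne(HBaseTestingUtility util) throws Exception {
        JVMClusterUtil.RegionServerThread rst = util.getMiniHBaseCluster().startRegionServer();
        rst.waitForServerOnline();
      }
    }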
2023-07-11 18:17:26,371 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845 2023-07-11 18:17:26,371 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43601 2023-07-11 18:17:26,371 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=46721 2023-07-11 18:17:26,375 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:26,375 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:26,375 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:26,375 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:26,375 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:26,376 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ZKUtil(162): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,376 WARN [RS:3;jenkins-hbase4:36281] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-11 18:17:26,376 INFO [RS:3;jenkins-hbase4:36281] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-11 18:17:26,376 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36281,1689099446204] 2023-07-11 18:17:26,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,376 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-11 18:17:26,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,376 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:26,376 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,378 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-11 18:17:26,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:26,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:26,378 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,382 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,383 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:26,387 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:26,387 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:26,387 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ZKUtil(162): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,388 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ZKUtil(162): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:26,388 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ZKUtil(162): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,388 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ZKUtil(162): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:26,389 DEBUG [RS:3;jenkins-hbase4:36281] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-11 18:17:26,389 INFO [RS:3;jenkins-hbase4:36281] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-11 18:17:26,390 INFO [RS:3;jenkins-hbase4:36281] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-11 18:17:26,390 INFO [RS:3;jenkins-hbase4:36281] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-11 18:17:26,391 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:26,394 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-11 18:17:26,396 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
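The new server's WAL provider (the WALFactory entry above, AsyncFSWALProvider) and the memstore and compaction-throughput figures it reports are all driven by standard configuration keys. A hedged sketch assuming the stock key names; the values restate what the log prints and are not asserted as defaults for every version:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    final class RegionServerTuning {
      static Configuration sketch() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                        // AsyncFSWALProvider, as logged
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);   // fraction of heap (782.4 MB here)
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f); // low-water mark
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024); // 100 MB/s
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);   // 50 MB/s
        return conf;
      }
    }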
2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,397 DEBUG [RS:3;jenkins-hbase4:36281] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-11 18:17:26,399 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:26,399 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:26,399 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-11 18:17:26,416 INFO [RS:3;jenkins-hbase4:36281] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-11 18:17:26,416 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36281,1689099446204-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-11 18:17:26,426 INFO [RS:3;jenkins-hbase4:36281] regionserver.Replication(203): jenkins-hbase4.apache.org,36281,1689099446204 started 2023-07-11 18:17:26,426 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36281,1689099446204, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36281, sessionid=0x101559aa18f000b 2023-07-11 18:17:26,426 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-11 18:17:26,427 DEBUG [RS:3;jenkins-hbase4:36281] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,427 DEBUG [RS:3;jenkins-hbase4:36281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36281,1689099446204' 2023-07-11 18:17:26,427 DEBUG [RS:3;jenkins-hbase4:36281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-11 18:17:26,427 DEBUG [RS:3;jenkins-hbase4:36281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-11 18:17:26,427 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-11 18:17:26,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:26,427 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-11 18:17:26,428 DEBUG [RS:3;jenkins-hbase4:36281] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:26,428 DEBUG [RS:3;jenkins-hbase4:36281] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36281,1689099446204' 2023-07-11 18:17:26,428 DEBUG [RS:3;jenkins-hbase4:36281] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-11 18:17:26,428 DEBUG [RS:3;jenkins-hbase4:36281] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-11 18:17:26,428 DEBUG [RS:3;jenkins-hbase4:36281] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-11 18:17:26,428 INFO [RS:3;jenkins-hbase4:36281] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-11 18:17:26,428 INFO [RS:3;jenkins-hbase4:36281] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-11 18:17:26,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:26,430 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:26,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:26,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:26,441 DEBUG [hconnection-0x4981e3b1-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-11 18:17:26,442 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44810, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-11 18:17:26,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:26,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:26,450 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:26,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:26,450 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:44334 deadline: 1689100646450, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
2023-07-11 18:17:26,451 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:26,452 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:26,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:26,452 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:26,453 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:26,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:26,453 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:26,510 INFO [Listener at localhost/41775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=561 (was 524) Potentially hanging thread: qtp567469655-2354 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data1) 
java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@43a47f7f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1630252642-2634-acceptor-0@6fd1c6d6-ServerConnector@67ada82a{HTTP/1.1, (http/1.1)}{0.0.0.0:35213} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46453 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-217497064_17 at /127.0.0.1:50408 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x282e3da3-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1272058868@qtp-1352138693-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-252669777_17 at /127.0.0.1:50352 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1876049819@qtp-1352138693-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33419 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp1992898705-2329 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x282e3da3-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp620638926-2370 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:37037-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099445124 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: CacheReplicationMonitor(842552922) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp620638926-2365 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 45957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@52f44e2f java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@627813ae java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server 
handler 1 on default port 46453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x3b5fec2a-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@66734993[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845-prefix:jenkins-hbase4.apache.org,41487,1689099444383 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:43601 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 3 on default port 41775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: hconnection-0x4981e3b1-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_23717496_17 at /127.0.0.1:57826 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-217497064_17 at /127.0.0.1:51452 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1630252642-2633 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp448677092-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567469655-2356 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x05f70b4e-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845-prefix:jenkins-hbase4.apache.org,37037,1689099444235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ProcessThread(sid:0 cport:50731): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@27bb979d[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-17e40f1d-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 45957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/41775 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1992898705-2325-acceptor-0@55d1a70a-ServerConnector@35af6de7{HTTP/1.1, (http/1.1)}{0.0.0.0:35253} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1630252642-2636 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x66e9ca67-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS:0;jenkins-hbase4:33705-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 45957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1390799853@qtp-1517135865-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@4f269b40 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45067-SendThread(127.0.0.1:51347) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:43601 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/41775-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x664981a1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@86c7fe8 java.lang.Thread.sleep(Native 
Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:43601 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099445116 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: IPC Server handler 1 on default port 41775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x664981a1-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@7a514d93 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:36281Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-571-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:57840 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x2f6c26b1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-252669777_17 at /127.0.0.1:57788 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x282e3da3-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x282e3da3-shared-pool-2 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp620638926-2368 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 46453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC 
Server handler 1 on default port 43601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x05f70b4e sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:33705 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: qtp620638926-2366 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:50731 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: BP-1290189580-172.31.14.131-1689099443058 
heartbeating to localhost/127.0.0.1:43601 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1730216535@qtp-1517135865-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37609 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3bd0ac7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x282e3da3-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x3b5fec2a sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x488cee0b-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x282e3da3-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:43601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x2f6c26b1-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 4 on default port 46453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-27862e2e-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1630252642-2639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:41487 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 43601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x488cee0b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data6/current/BP-1290189580-172.31.14.131-1689099443058 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@65bb1752 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33705Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:37037Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@9b3bb8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-252669777_17 at /127.0.0.1:50378 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp620638926-2371 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1660374248-2264-acceptor-0@2913f490-ServerConnector@1f0d357d{HTTP/1.1, (http/1.1)}{0.0.0.0:46721} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:33083 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: hconnection-0x282e3da3-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:36281 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845-prefix:jenkins-hbase4.apache.org,33705,1689099444082.meta sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 41775 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:43601 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 43601 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:41487-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 41775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51347@0x1796f8f0-SendThread(127.0.0.1:51347) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: IPC Server handler 3 on default port 46453 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:51410 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:57834 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 43601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: M:0;jenkins-hbase4:38477 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x3b5fec2a-SendThread(127.0.0.1:50731) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp567469655-2361 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 45957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:33083 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data2/current/BP-1290189580-172.31.14.131-1689099443058 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:3;jenkins-hbase4:36281-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1992898705-2324 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1992898705-2328 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xa29c91c-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2294 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data4/current/BP-1290189580-172.31.14.131-1689099443058 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 41775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:33083 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51347@0x1796f8f0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1890377436@qtp-667667264-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41161 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: Listener at localhost/41775-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins@localhost:33083 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp620638926-2367 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1992898705-2331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData-prefix:jenkins-hbase4.apache.org,38477,1689099443878 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp620638926-2372 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: qtp1630252642-2635 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x0c6b1b47 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:33083 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp567469655-2357 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 45957 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: 722774454@qtp-274600926-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33267 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: qtp567469655-2360 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 43601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1992898705-2330 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3730e051 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1630252642-2640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567469655-2358 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_23717496_17 at /127.0.0.1:50416 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/45067-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:43601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:50428 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@3f69d495 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-217497064_17 at /127.0.0.1:57892 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:43601 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:33083 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1542399295@qtp-667667264-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-558-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-217497064_17 at /127.0.0.1:57814 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:41487Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data5/current/BP-1290189580-172.31.14.131-1689099443058 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 46453 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data3/current/BP-1290189580-172.31.14.131-1689099443058 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51347@0x1796f8f0-EventThread 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x664981a1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp620638926-2369-acceptor-0@4e8a5f84-ServerConnector@2a39c821{HTTP/1.1, (http/1.1)}{0.0.0.0:40771} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,38477,1689099443878 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-252669777_17 at /127.0.0.1:51430 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x05f70b4e-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 0 on default port 41775 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40609,1689099437960 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: RS:1;jenkins-hbase4:37037 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 43601 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4981e3b1-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:43601 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567469655-2355-acceptor-0@6b99e8dc-ServerConnector@3ed2d6d6{HTTP/1.1, (http/1.1)}{0.0.0.0:37915} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1630252642-2638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:43601 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:33083 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:33083 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@49c65185[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x0c6b1b47-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1992898705-2327 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2265 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-11478867-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845-prefix:jenkins-hbase4.apache.org,33705,1689099444082 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x0c6b1b47-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x66e9ca67 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1660374248-2263 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/741838246.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33705 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_23717496_17 at /127.0.0.1:51462 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2295-acceptor-0@1a39c370-ServerConnector@2034b22a{HTTP/1.1, (http/1.1)}{0.0.0.0:36451} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:51488 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2dc417e4 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:51474 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-524b5e18-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp567469655-2359 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp448677092-2301 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33705 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1630252642-2637 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@2e637782 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1290189580-172.31.14.131-1689099443058:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at 
localhost/41775.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@59092c2a java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x282e3da3-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/41775-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data1/current/BP-1290189580-172.31.14.131-1689099443058 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x2f6c26b1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: 1765079784@qtp-274600926-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Client (480839201) connection to localhost/127.0.0.1:33083 from jenkins.hfs.4 java.lang.Object.wait(Native 
Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x488cee0b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/878617475.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-567-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 45957 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1992898705-2326 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38477 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_182602797_17 at /127.0.0.1:50434 [Receiving block BP-1290189580-172.31.14.131-1689099443058:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:50731@0x66e9ca67-SendThread(127.0.0.1:50731) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41487 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37037 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=36281 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=827 (was 820) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=546 (was 563), ProcessCount=170 (was 170), AvailableMemoryMB=4412 (was 4305) - AvailableMemoryMB LEAK? - 2023-07-11 18:17:26,514 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=561 is superior to 500 2023-07-11 18:17:26,530 INFO [RS:3;jenkins-hbase4:36281] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36281%2C1689099446204, suffix=, logDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,36281,1689099446204, archiveDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs, maxLogs=32 2023-07-11 18:17:26,535 INFO [Listener at localhost/41775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=561, OpenFileDescriptor=827, MaxFileDescriptor=60000, SystemLoadAverage=546, ProcessCount=170, AvailableMemoryMB=4411 2023-07-11 18:17:26,535 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=561 is superior to 500 2023-07-11 18:17:26,535 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-11 18:17:26,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:26,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:26,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
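The setup/teardown traffic above and below (ListRSGroupInfos, MoveTables, MoveServers, AddRSGroup, RemoveRSGroup against the master at port 38477) is the RSGroupAdminService protocol being driven from the test client. The following is a minimal, hypothetical sketch of how those RPCs are typically issued in Java, assuming the branch-2.4 RSGroupAdminClient API (constructor taking a Connection, Address-based server identifiers); the group name "appGroup" and the server address used here are illustrative only and are not taken from this run:

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupAdminSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // RSGroupAdminService.AddRSGroup
          rsGroupAdmin.addRSGroup("appGroup");
          // RSGroupAdminService.MoveServers (the address must belong to a live region server)
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33705)),
              "appGroup");
          // RSGroupAdminService.ListRSGroupInfos
          rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));
          // Move the server back so the group is empty, then RSGroupAdminService.RemoveRSGroup
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33705)),
              "default");
          rsGroupAdmin.removeRSGroup("appGroup");
        }
      }
    }

Passing an address that is not a live region server (for example the master's own address on port 38477, as the teardown does further down) is rejected with the ConstraintException recorded below.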
2023-07-11 18:17:26,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:26,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:26,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:26,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:26,544 WARN [IPC Server handler 1 on default port 43601] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-11 18:17:26,544 WARN [IPC Server handler 1 on default port 43601] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-11 18:17:26,544 WARN [IPC Server handler 1 on default port 43601] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-11 18:17:26,552 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK] 2023-07-11 18:17:26,552 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK] 2023-07-11 18:17:26,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:26,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:26,554 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:26,557 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): 
Restoring servers: 0 2023-07-11 18:17:26,558 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:26,559 INFO [RS:3;jenkins-hbase4:36281] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/WALs/jenkins-hbase4.apache.org,36281,1689099446204/jenkins-hbase4.apache.org%2C36281%2C1689099446204.1689099446531 2023-07-11 18:17:26,559 DEBUG [RS:3;jenkins-hbase4:36281] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:35255,DS-c7742031-d772-4baf-97b2-c6c1d38992ba,DISK], DatanodeInfoWithStorage[127.0.0.1:43963,DS-b95f6d85-07d1-4bf5-931c-7526c43d5d29,DISK]] 2023-07-11 18:17:26,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:26,560 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:26,562 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:26,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:26,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:26,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:26,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:26,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:26,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:44334 deadline: 1689100646570, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:26,571 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:26,573 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:26,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:26,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:26,573 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:26,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:26,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:26,576 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:26,576 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-11 18:17:26,578 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:26,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: 
namespace: "default" qualifier: "t1" procId is: 12 2023-07-11 18:17:26,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 18:17:26,579 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:26,580 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:26,580 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:26,582 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-11 18:17:26,583 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,584 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b empty. 2023-07-11 18:17:26,584 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,584 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-11 18:17:26,588 WARN [IPC Server handler 2 on default port 43601] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-11 18:17:26,588 WARN [IPC Server handler 2 on default port 43601] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-11 18:17:26,588 WARN [IPC Server handler 2 on default port 43601] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-11 18:17:26,594 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-11 18:17:26,595 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => c2b58a0c15c229721d0fb7d066da347b, NAME => 't1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', 
IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp 2023-07-11 18:17:26,597 WARN [IPC Server handler 2 on default port 43601] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology 2023-07-11 18:17:26,597 WARN [IPC Server handler 2 on default port 43601] protocol.BlockStoragePolicy(161): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}) 2023-07-11 18:17:26,597 WARN [IPC Server handler 2 on default port 43601] blockmanagement.BlockPlacementPolicyDefault(446): Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]} 2023-07-11 18:17:26,601 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:26,601 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing c2b58a0c15c229721d0fb7d066da347b, disabling compactions & flushes 2023-07-11 18:17:26,601 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:26,601 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:26,601 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. after waiting 0 ms 2023-07-11 18:17:26,602 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:26,602 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 
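The CreateTableProcedure above (pid=12) was triggered by the client create request whose descriptor is logged at 18:17:26,576. The following is a minimal, hypothetical sketch of the standard HBase 2.x client calls that would produce an equivalent request; the connection setup is illustrative, and only a few of the logged column-family attributes are reproduced explicitly (the remaining ones would be set the same way through the ColumnFamilyDescriptorBuilder setters):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateT1Sketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
              .setRegionReplication(1)                                  // REGION_REPLICATION => '1'
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
                  .setMaxVersions(1)                                    // VERSIONS => '1'
                  .setBloomFilterType(BloomType.NONE)                   // BLOOMFILTER => 'NONE'
                  .setBlocksize(65536)                                  // BLOCKSIZE => '65536'
                  .build())
              .build();
          // Issues the master RPC that stores and runs the CreateTableProcedure (pid=12 above).
          admin.createTable(t1);
        }
      }
    }
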
2023-07-11 18:17:26,602 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for c2b58a0c15c229721d0fb7d066da347b: 2023-07-11 18:17:26,604 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-11 18:17:26,604 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099446604"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099446604"}]},"ts":"1689099446604"} 2023-07-11 18:17:26,606 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-11 18:17:26,606 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-11 18:17:26,606 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099446606"}]},"ts":"1689099446606"} 2023-07-11 18:17:26,607 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-11 18:17:26,610 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-11 18:17:26,610 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-11 18:17:26,610 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-11 18:17:26,610 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-11 18:17:26,610 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-11 18:17:26,610 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-11 18:17:26,611 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, ASSIGN}] 2023-07-11 18:17:26,611 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, ASSIGN 2023-07-11 18:17:26,612 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41487,1689099444383; forceNewPlan=false, retain=false 2023-07-11 18:17:26,617 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-11 18:17:26,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 18:17:26,765 INFO [jenkins-hbase4:38477] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-11 18:17:26,766 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c2b58a0c15c229721d0fb7d066da347b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,766 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099446766"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099446766"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099446766"}]},"ts":"1689099446766"} 2023-07-11 18:17:26,768 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure c2b58a0c15c229721d0fb7d066da347b, server=jenkins-hbase4.apache.org,41487,1689099444383}] 2023-07-11 18:17:26,884 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 18:17:26,921 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,921 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-11 18:17:26,923 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43218, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-11 18:17:26,926 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:26,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c2b58a0c15c229721d0fb7d066da347b, NAME => 't1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.', STARTKEY => '', ENDKEY => ''} 2023-07-11 18:17:26,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-11 18:17:26,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,928 INFO [StoreOpener-c2b58a0c15c229721d0fb7d066da347b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,929 DEBUG [StoreOpener-c2b58a0c15c229721d0fb7d066da347b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/cf1 2023-07-11 18:17:26,929 DEBUG [StoreOpener-c2b58a0c15c229721d0fb7d066da347b-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/cf1 2023-07-11 18:17:26,930 INFO [StoreOpener-c2b58a0c15c229721d0fb7d066da347b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c2b58a0c15c229721d0fb7d066da347b columnFamilyName cf1 2023-07-11 18:17:26,930 INFO [StoreOpener-c2b58a0c15c229721d0fb7d066da347b-1] regionserver.HStore(310): Store=c2b58a0c15c229721d0fb7d066da347b/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-11 18:17:26,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/default/t1/c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/default/t1/c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,934 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:26,936 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-11 18:17:26,937 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c2b58a0c15c229721d0fb7d066da347b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11657568640, jitterRate=0.08569568395614624}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-11 18:17:26,937 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c2b58a0c15c229721d0fb7d066da347b: 2023-07-11 18:17:26,938 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b., pid=14, masterSystemTime=1689099446921 2023-07-11 18:17:26,941 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:26,942 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 
2023-07-11 18:17:26,942 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=c2b58a0c15c229721d0fb7d066da347b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:26,942 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099446942"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689099446942"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689099446942"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689099446942"}]},"ts":"1689099446942"} 2023-07-11 18:17:26,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-11 18:17:26,945 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure c2b58a0c15c229721d0fb7d066da347b, server=jenkins-hbase4.apache.org,41487,1689099444383 in 175 msec 2023-07-11 18:17:26,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-11 18:17:26,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, ASSIGN in 334 msec 2023-07-11 18:17:26,948 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-11 18:17:26,948 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099446948"}]},"ts":"1689099446948"} 2023-07-11 18:17:26,949 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-11 18:17:26,952 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-11 18:17:26,954 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 376 msec 2023-07-11 18:17:27,185 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-11 18:17:27,185 INFO [Listener at localhost/41775] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-11 18:17:27,185 DEBUG [Listener at localhost/41775] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-11 18:17:27,185 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,188 INFO [Listener at localhost/41775] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-11 18:17:27,188 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,188 INFO [Listener at localhost/41775] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-11 18:17:27,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-11 18:17:27,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-11 18:17:27,192 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-11 18:17:27,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-11 18:17:27,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:44334 deadline: 1689099507189, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-11 18:17:27,194 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,195 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-11 18:17:27,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,296 INFO [Listener at localhost/41775] client.HBaseAdmin$15(890): Started disable of t1 2023-07-11 18:17:27,296 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-11 18:17:27,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-11 18:17:27,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 18:17:27,300 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099447300"}]},"ts":"1689099447300"} 2023-07-11 18:17:27,301 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-11 18:17:27,303 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-11 18:17:27,303 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, UNASSIGN}] 2023-07-11 18:17:27,304 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, UNASSIGN 2023-07-11 18:17:27,305 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c2b58a0c15c229721d0fb7d066da347b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:27,305 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099447305"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689099447305"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689099447305"}]},"ts":"1689099447305"} 2023-07-11 18:17:27,306 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure c2b58a0c15c229721d0fb7d066da347b, server=jenkins-hbase4.apache.org,41487,1689099444383}] 2023-07-11 18:17:27,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 18:17:27,457 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:27,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c2b58a0c15c229721d0fb7d066da347b, disabling compactions & flushes 2023-07-11 18:17:27,458 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:27,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:27,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. after waiting 0 ms 2023-07-11 18:17:27,458 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 
2023-07-11 18:17:27,461 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-11 18:17:27,462 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b. 2023-07-11 18:17:27,462 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c2b58a0c15c229721d0fb7d066da347b: 2023-07-11 18:17:27,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:27,464 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=c2b58a0c15c229721d0fb7d066da347b, regionState=CLOSED 2023-07-11 18:17:27,464 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689099447464"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689099447464"}]},"ts":"1689099447464"} 2023-07-11 18:17:27,467 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-11 18:17:27,467 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure c2b58a0c15c229721d0fb7d066da347b, server=jenkins-hbase4.apache.org,41487,1689099444383 in 159 msec 2023-07-11 18:17:27,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-11 18:17:27,468 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=c2b58a0c15c229721d0fb7d066da347b, UNASSIGN in 164 msec 2023-07-11 18:17:27,469 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689099447469"}]},"ts":"1689099447469"} 2023-07-11 18:17:27,470 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-11 18:17:27,472 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-11 18:17:27,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 175 msec 2023-07-11 18:17:27,602 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-11 18:17:27,602 INFO [Listener at localhost/41775] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-11 18:17:27,603 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-11 18:17:27,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-11 18:17:27,606 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-11 18:17:27,606 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-11 18:17:27,606 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-11 18:17:27,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,608 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,609 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:27,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 18:17:27,611 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/cf1, FileablePath, hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/recovered.edits] 2023-07-11 18:17:27,616 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/recovered.edits/4.seqid to hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/archive/data/default/t1/c2b58a0c15c229721d0fb7d066da347b/recovered.edits/4.seqid 2023-07-11 18:17:27,617 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/.tmp/data/default/t1/c2b58a0c15c229721d0fb7d066da347b 2023-07-11 18:17:27,617 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-11 18:17:27,619 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-11 18:17:27,620 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-11 18:17:27,622 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-11 18:17:27,623 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-11 18:17:27,623 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-11 18:17:27,623 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689099447623"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:27,624 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-11 18:17:27,624 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => c2b58a0c15c229721d0fb7d066da347b, NAME => 't1,,1689099446575.c2b58a0c15c229721d0fb7d066da347b.', STARTKEY => '', ENDKEY => ''}] 2023-07-11 18:17:27,624 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-11 18:17:27,624 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689099447624"}]},"ts":"9223372036854775807"} 2023-07-11 18:17:27,625 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-11 18:17:27,628 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-11 18:17:27,629 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-11 18:17:27,711 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-11 18:17:27,712 INFO [Listener at localhost/41775] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-11 18:17:27,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,715 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:27,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:27,716 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:27,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:27,717 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:27,718 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:27,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,721 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:27,726 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:27,728 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:27,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:27,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,734 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:27,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:27,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:27,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:44334 deadline: 1689100647738, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:27,738 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:27,742 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,743 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:27,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,761 INFO [Listener at localhost/41775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=574 (was 561) - Thread LEAK? -, OpenFileDescriptor=831 (was 827) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=546 (was 546), ProcessCount=170 (was 170), AvailableMemoryMB=4403 (was 4411) 2023-07-11 18:17:27,761 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-11 18:17:27,777 INFO [Listener at localhost/41775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=546, ProcessCount=170, AvailableMemoryMB=4402 2023-07-11 18:17:27,778 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-11 18:17:27,778 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-11 18:17:27,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:27,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:27,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:27,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:27,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:27,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:27,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:27,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:27,791 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:27,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:27,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,793 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:27,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:27,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:27,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44334 deadline: 1689100647800, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:27,801 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:27,802 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,803 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:27,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-11 18:17:27,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:27,805 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-11 18:17:27,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-11 18:17:27,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-11 18:17:27,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:27,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
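The ConstraintException recorded above recurs in every per-test cleanup pass: TestRSGroupsBase#tearDownAfterMethod (also reached from setUpBeforeMethod) removes and re-adds the "master" rsgroup and then tries to move the active master's address (jenkins-hbase4.apache.org:38477) into it, but that address is not among the live region servers tracked by the group manager, so RSGroupAdminServer#moveServers rejects the call. The test merely logs it ("Got this on setup, FYI") and continues. Below is a minimal sketch of the kind of guard that would avoid the exception, assuming the branch-2.4 rsgroup client API; the class and method names (RSGroupMoveHelper, moveLiveServers) are hypothetical and not part of HBase, and this is not the fix the test itself applies.

    import java.io.IOException;
    import java.util.Set;
    import java.util.stream.Collectors;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    public final class RSGroupMoveHelper {
      private RSGroupMoveHelper() {}

      // Move only the addresses that currently belong to a live region server.
      // Addresses that are offline, or that (like the master here) were never
      // region servers, are skipped instead of triggering a ConstraintException.
      static void moveLiveServers(Admin admin, RSGroupAdmin rsGroupAdmin,
          Set<Address> candidates, String targetGroup) throws IOException {
        Set<Address> live = admin.getRegionServers().stream()
            .map(ServerName::getAddress)
            .collect(Collectors.toSet());
        Set<Address> movable = candidates.stream()
            .filter(live::contains)
            .collect(Collectors.toSet());
        if (!movable.isEmpty()) {
          rsGroupAdmin.moveServers(movable, targetGroup);
        }
      }
    }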
2023-07-11 18:17:27,814 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:27,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:27,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:27,815 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:27,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:27,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:27,823 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:27,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:27,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,827 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:27,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:27,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:27,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44334 deadline: 1689100647832, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:27,832 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:27,834 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,835 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:27,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,854 INFO [Listener at localhost/41775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=576 (was 574) - Thread LEAK? 
-, OpenFileDescriptor=831 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=546 (was 546), ProcessCount=170 (was 170), AvailableMemoryMB=4401 (was 4402) 2023-07-11 18:17:27,854 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-11 18:17:27,870 INFO [Listener at localhost/41775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=576, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=546, ProcessCount=170, AvailableMemoryMB=4401 2023-07-11 18:17:27,871 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=576 is superior to 500 2023-07-11 18:17:27,871 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-11 18:17:27,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:27,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:27,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:27,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:27,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:27,881 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:27,883 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:27,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:27,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,886 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:27,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,893 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:27,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:27,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44334 deadline: 1689100647893, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:27,893 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:27,895 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,895 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,896 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:27,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,896 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:27,900 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
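Each cleanup pass is bracketed by the Waiter lines above ("Waiting up to [60,000] milli-secs", "Waiting for cleanup to finish ..."): the base test polls the master until the group layout settles back to the default group plus the re-added "master" group. A rough illustration of that polling pattern with the HBase test Waiter API follows; the wrapper class and method names (CleanupWait, waitForCleanup) are made up for the sketch, and the completion condition shown is illustrative rather than the test's actual check.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.Waiter;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

    public final class CleanupWait {
      private CleanupWait() {}

      // Poll until only the two expected groups remain ("default" plus the
      // re-added "master" group); gives up after the 60 s timeout seen in the log.
      static void waitForCleanup(HBaseTestingUtility testUtil,
          RSGroupAdmin rsGroupAdmin) throws Exception {
        testUtil.waitFor(60_000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            return rsGroupAdmin.listRSGroups().size() == 2;
          }
        });
      }
    }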
2023-07-11 18:17:27,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:27,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:27,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:27,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:27,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:27,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:27,909 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:27,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:27,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:27,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,917 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:27,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:27,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44334 deadline: 1689100647919, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:27,919 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:27,921 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,922 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:27,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,943 INFO [Listener at localhost/41775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=577 (was 576) - Thread LEAK? 
-, OpenFileDescriptor=831 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=546 (was 546), ProcessCount=170 (was 170), AvailableMemoryMB=4401 (was 4401) 2023-07-11 18:17:27,943 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-11 18:17:27,962 INFO [Listener at localhost/41775] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=577, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=546, ProcessCount=170, AvailableMemoryMB=4400 2023-07-11 18:17:27,962 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-11 18:17:27,962 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-11 18:17:27,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:27,967 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-11 18:17:27,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:27,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:27,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:27,969 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:27,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:27,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:27,976 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:27,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:27,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,980 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:27,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:27,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,989 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:27,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:27,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44334 deadline: 1689100647989, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:27,990 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-11 18:17:27,992 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:27,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:27,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:27,993 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:27,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:27,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:27,994 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-11 18:17:27,995 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-11 18:17:27,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-11 18:17:27,998 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:27,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:27,999 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-11 18:17:28,001 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:28,003 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:28,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:28,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-11 18:17:28,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,012 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 18:17:28,016 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:28,019 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-11 18:17:28,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-11 18:17:28,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-11 18:17:28,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:28,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:44334 deadline: 1689100648114, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-11 18:17:28,122 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-11 18:17:28,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 18:17:28,136 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-11 18:17:28,137 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 14 msec 2023-07-11 18:17:28,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-11 18:17:28,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-11 18:17:28,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-11 18:17:28,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:28,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-11 18:17:28,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:28,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-11 18:17:28,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:28,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:28,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:28,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-11 18:17:28,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,259 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,262 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-11 18:17:28,264 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,265 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-11 18:17:28,265 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-11 18:17:28,266 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,268 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-11 18:17:28,269 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-11 18:17:28,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-11 18:17:28,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-11 18:17:28,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-11 18:17:28,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:28,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:28,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-11 18:17:28,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:28,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:28,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:28,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:28,375 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:44334 deadline: 1689099508375, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-11 18:17:28,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:28,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:28,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:28,379 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:28,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:28,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:28,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:28,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-11 18:17:28,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:28,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:28,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-11 18:17:28,385 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:28,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-11 18:17:28,386 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-11 18:17:28,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-11 18:17:28,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-11 18:17:28,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-11 18:17:28,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-11 18:17:28,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:28,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-11 18:17:28,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-11 18:17:28,395 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-11 18:17:28,396 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-11 18:17:28,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-11 18:17:28,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-11 18:17:28,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-11 18:17:28,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-11 18:17:28,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:28,402 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:28,404 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38477] to rsgroup master 2023-07-11 18:17:28,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-11 18:17:28,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:44334 deadline: 1689100648404, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 2023-07-11 18:17:28,405 WARN [Listener at localhost/41775] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor58.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:38477 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-11 18:17:28,406 INFO [Listener at localhost/41775] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-11 18:17:28,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-11 18:17:28,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-11 18:17:28,407 INFO [Listener at localhost/41775] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33705, jenkins-hbase4.apache.org:36281, jenkins-hbase4.apache.org:37037, jenkins-hbase4.apache.org:41487], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-11 18:17:28,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-11 18:17:28,408 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38477] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-11 18:17:28,427 INFO [Listener at localhost/41775] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=577 (was 577), OpenFileDescriptor=831 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=546 (was 546), ProcessCount=170 (was 170), AvailableMemoryMB=4405 (was 4400) - AvailableMemoryMB LEAK? 
- 2023-07-11 18:17:28,428 WARN [Listener at localhost/41775] hbase.ResourceChecker(130): Thread=577 is superior to 500 2023-07-11 18:17:28,428 INFO [Listener at localhost/41775] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-11 18:17:28,428 INFO [Listener at localhost/41775] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-11 18:17:28,428 DEBUG [Listener at localhost/41775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0c6b1b47 to 127.0.0.1:50731 2023-07-11 18:17:28,428 DEBUG [Listener at localhost/41775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,428 DEBUG [Listener at localhost/41775] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-11 18:17:28,428 DEBUG [Listener at localhost/41775] util.JVMClusterUtil(257): Found active master hash=1978614382, stopped=false 2023-07-11 18:17:28,428 DEBUG [Listener at localhost/41775] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-11 18:17:28,428 DEBUG [Listener at localhost/41775] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-11 18:17:28,428 INFO [Listener at localhost/41775] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:28,436 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:28,436 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:28,436 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:28,436 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:28,436 INFO [Listener at localhost/41775] procedure2.ProcedureExecutor(629): Stopping 2023-07-11 18:17:28,436 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-11 18:17:28,436 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:28,436 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:28,437 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:28,436 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:28,437 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:28,437 DEBUG [Listener at localhost/41775] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x488cee0b to 127.0.0.1:50731 2023-07-11 18:17:28,437 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-11 18:17:28,437 DEBUG [Listener at localhost/41775] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,437 INFO [Listener at localhost/41775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33705,1689099444082' ***** 2023-07-11 18:17:28,437 INFO [Listener at localhost/41775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:28,438 INFO [Listener at localhost/41775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37037,1689099444235' ***** 2023-07-11 18:17:28,438 INFO [Listener at localhost/41775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:28,438 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:28,438 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:28,438 INFO [Listener at localhost/41775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41487,1689099444383' ***** 2023-07-11 18:17:28,438 INFO [Listener at localhost/41775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:28,438 INFO [Listener at localhost/41775] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36281,1689099446204' ***** 2023-07-11 18:17:28,439 INFO [Listener at localhost/41775] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-11 18:17:28,439 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:28,439 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:28,445 INFO [RS:0;jenkins-hbase4:33705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@29a4f6d2{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:28,445 INFO [RS:3;jenkins-hbase4:36281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2c9d9022{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:28,445 INFO [RS:2;jenkins-hbase4:41487] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@68e7269f{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:28,445 INFO [RS:1;jenkins-hbase4:37037] 
handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4814edce{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-11 18:17:28,445 INFO [RS:0;jenkins-hbase4:33705] server.AbstractConnector(383): Stopped ServerConnector@2034b22a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:28,446 INFO [RS:3;jenkins-hbase4:36281] server.AbstractConnector(383): Stopped ServerConnector@67ada82a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:28,446 INFO [RS:2;jenkins-hbase4:41487] server.AbstractConnector(383): Stopped ServerConnector@3ed2d6d6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:28,446 INFO [RS:0;jenkins-hbase4:33705] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:28,446 INFO [RS:2;jenkins-hbase4:41487] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:28,446 INFO [RS:1;jenkins-hbase4:37037] server.AbstractConnector(383): Stopped ServerConnector@35af6de7{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:28,446 INFO [RS:3;jenkins-hbase4:36281] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:28,447 INFO [RS:1;jenkins-hbase4:37037] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:28,447 INFO [RS:0;jenkins-hbase4:33705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33dbfe01{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:28,448 INFO [RS:3;jenkins-hbase4:36281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@e1fbe15{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:28,447 INFO [RS:2;jenkins-hbase4:41487] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6b3604ec{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:28,450 INFO [RS:3;jenkins-hbase4:36281] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@17ccdbfb{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:28,449 INFO [RS:0;jenkins-hbase4:33705] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@dfe7868{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:28,449 INFO [RS:1;jenkins-hbase4:37037] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6015bda8{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:28,450 INFO [RS:2;jenkins-hbase4:41487] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@370f144d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:28,451 
INFO [RS:1;jenkins-hbase4:37037] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6fdedd33{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:28,452 INFO [RS:3;jenkins-hbase4:36281] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:28,452 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:28,452 INFO [RS:3;jenkins-hbase4:36281] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:28,452 INFO [RS:3;jenkins-hbase4:36281] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:28,452 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:28,452 DEBUG [RS:3;jenkins-hbase4:36281] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x664981a1 to 127.0.0.1:50731 2023-07-11 18:17:28,452 INFO [RS:2;jenkins-hbase4:41487] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:28,452 INFO [RS:0;jenkins-hbase4:33705] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:28,452 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:28,452 INFO [RS:0;jenkins-hbase4:33705] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:28,453 INFO [RS:0;jenkins-hbase4:33705] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:28,453 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(3305): Received CLOSE for e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:28,453 INFO [RS:1;jenkins-hbase4:37037] regionserver.HeapMemoryManager(220): Stopping 2023-07-11 18:17:28,453 INFO [RS:1;jenkins-hbase4:37037] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:28,453 INFO [RS:1;jenkins-hbase4:37037] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-11 18:17:28,452 DEBUG [RS:3;jenkins-hbase4:36281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,453 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(3305): Received CLOSE for 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:28,453 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36281,1689099446204; all regions closed. 2023-07-11 18:17:28,453 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:28,452 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-11 18:17:28,453 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:28,452 INFO [RS:2;jenkins-hbase4:41487] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-11 18:17:28,453 INFO [RS:2;jenkins-hbase4:41487] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-11 18:17:28,453 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:28,453 DEBUG [RS:2;jenkins-hbase4:41487] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x05f70b4e to 127.0.0.1:50731 2023-07-11 18:17:28,454 DEBUG [RS:2;jenkins-hbase4:41487] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,454 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41487,1689099444383; all regions closed. 2023-07-11 18:17:28,453 DEBUG [RS:0;jenkins-hbase4:33705] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2f6c26b1 to 127.0.0.1:50731 2023-07-11 18:17:28,454 DEBUG [RS:0;jenkins-hbase4:33705] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,454 INFO [RS:0;jenkins-hbase4:33705] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:28,454 INFO [RS:0;jenkins-hbase4:33705] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:28,454 INFO [RS:0;jenkins-hbase4:33705] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:28,454 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-11 18:17:28,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e31fea5d4bfe5b3b2aebd24b6c92d752, disabling compactions & flushes 2023-07-11 18:17:28,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:28,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:28,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. after waiting 0 ms 2023-07-11 18:17:28,454 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 
2023-07-11 18:17:28,454 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e31fea5d4bfe5b3b2aebd24b6c92d752 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-11 18:17:28,454 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:28,455 DEBUG [RS:1;jenkins-hbase4:37037] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b5fec2a to 127.0.0.1:50731 2023-07-11 18:17:28,455 DEBUG [RS:1;jenkins-hbase4:37037] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,455 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-11 18:17:28,455 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1478): Online Regions={0b656c866d17506671369550ab5ca4ba=hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba.} 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0b656c866d17506671369550ab5ca4ba, disabling compactions & flushes 2023-07-11 18:17:28,455 DEBUG [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1504): Waiting on 0b656c866d17506671369550ab5ca4ba 2023-07-11 18:17:28,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. after waiting 0 ms 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 
2023-07-11 18:17:28,455 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 0b656c866d17506671369550ab5ca4ba 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-11 18:17:28,455 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-11 18:17:28,455 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, e31fea5d4bfe5b3b2aebd24b6c92d752=hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752.} 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-11 18:17:28,455 DEBUG [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1504): Waiting on 1588230740, e31fea5d4bfe5b3b2aebd24b6c92d752 2023-07-11 18:17:28,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-11 18:17:28,455 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-11 18:17:28,456 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-11 18:17:28,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,466 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,475 DEBUG [RS:3;jenkins-hbase4:36281] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs 2023-07-11 18:17:28,475 INFO [RS:3;jenkins-hbase4:36281] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36281%2C1689099446204:(num 1689099446531) 2023-07-11 18:17:28,475 DEBUG [RS:3;jenkins-hbase4:36281] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,475 INFO [RS:3;jenkins-hbase4:36281] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,476 DEBUG [RS:2;jenkins-hbase4:41487] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs 2023-07-11 18:17:28,476 INFO [RS:2;jenkins-hbase4:41487] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41487%2C1689099444383:(num 1689099445389) 2023-07-11 18:17:28,476 DEBUG [RS:2;jenkins-hbase4:41487] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,476 INFO [RS:2;jenkins-hbase4:41487] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,479 INFO [RS:3;jenkins-hbase4:36281] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:28,479 INFO [RS:3;jenkins-hbase4:36281] 
regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:28,479 INFO [RS:3;jenkins-hbase4:36281] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:28,479 INFO [RS:3;jenkins-hbase4:36281] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:28,479 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:28,483 INFO [RS:2;jenkins-hbase4:41487] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:28,483 INFO [RS:2;jenkins-hbase4:41487] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:28,483 INFO [RS:2;jenkins-hbase4:41487] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:28,483 INFO [RS:2;jenkins-hbase4:41487] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:28,483 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:28,484 INFO [RS:3;jenkins-hbase4:36281] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36281 2023-07-11 18:17:28,485 INFO [RS:2;jenkins-hbase4:41487] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41487 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:28,487 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:28,487 DEBUG [Listener at 
localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36281,1689099446204 2023-07-11 18:17:28,488 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:28,488 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:28,488 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:28,488 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:28,488 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41487,1689099444383 2023-07-11 18:17:28,488 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36281,1689099446204] 2023-07-11 18:17:28,488 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36281,1689099446204; numProcessing=1 2023-07-11 18:17:28,490 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36281,1689099446204 already deleted, retry=false 2023-07-11 18:17:28,490 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36281,1689099446204 expired; onlineServers=3 2023-07-11 18:17:28,490 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41487,1689099444383] 2023-07-11 18:17:28,490 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41487,1689099444383; numProcessing=2 2023-07-11 18:17:28,491 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41487,1689099444383 already deleted, retry=false 2023-07-11 18:17:28,491 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41487,1689099444383 expired; onlineServers=2 2023-07-11 18:17:28,504 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), 
to=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/.tmp/info/7ab41643037448f4920e87dbf93fbdfb 2023-07-11 18:17:28,507 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/.tmp/info/892aa953fe8442d3888587be348965fb 2023-07-11 18:17:28,512 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/.tmp/m/96cf37aa92114adb98eb2640482f8afc 2023-07-11 18:17:28,517 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 892aa953fe8442d3888587be348965fb 2023-07-11 18:17:28,524 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7ab41643037448f4920e87dbf93fbdfb 2023-07-11 18:17:28,524 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/.tmp/info/7ab41643037448f4920e87dbf93fbdfb as hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/info/7ab41643037448f4920e87dbf93fbdfb 2023-07-11 18:17:28,533 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 96cf37aa92114adb98eb2640482f8afc 2023-07-11 18:17:28,535 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/.tmp/m/96cf37aa92114adb98eb2640482f8afc as hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/m/96cf37aa92114adb98eb2640482f8afc 2023-07-11 18:17:28,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7ab41643037448f4920e87dbf93fbdfb 2023-07-11 18:17:28,537 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/info/7ab41643037448f4920e87dbf93fbdfb, entries=3, sequenceid=9, filesize=5.0 K 2023-07-11 18:17:28,538 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 0b656c866d17506671369550ab5ca4ba in 83ms, sequenceid=9, compaction requested=false 2023-07-11 18:17:28,540 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 96cf37aa92114adb98eb2640482f8afc 2023-07-11 18:17:28,540 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/m/96cf37aa92114adb98eb2640482f8afc, entries=12, sequenceid=29, filesize=5.4 K 2023-07-11 18:17:28,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for e31fea5d4bfe5b3b2aebd24b6c92d752 in 87ms, sequenceid=29, compaction requested=false 2023-07-11 18:17:28,555 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/namespace/0b656c866d17506671369550ab5ca4ba/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-11 18:17:28,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:28,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0b656c866d17506671369550ab5ca4ba: 2023-07-11 18:17:28,557 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/.tmp/rep_barrier/e70cb5d123c949118e4ec84f9779f32b 2023-07-11 18:17:28,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689099445644.0b656c866d17506671369550ab5ca4ba. 2023-07-11 18:17:28,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/rsgroup/e31fea5d4bfe5b3b2aebd24b6c92d752/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-11 18:17:28,558 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:17:28,558 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 2023-07-11 18:17:28,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e31fea5d4bfe5b3b2aebd24b6c92d752: 2023-07-11 18:17:28,559 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689099445706.e31fea5d4bfe5b3b2aebd24b6c92d752. 
2023-07-11 18:17:28,563 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e70cb5d123c949118e4ec84f9779f32b 2023-07-11 18:17:28,577 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/.tmp/table/bf07b5e933464eb2822af7fd28d861ad 2023-07-11 18:17:28,584 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bf07b5e933464eb2822af7fd28d861ad 2023-07-11 18:17:28,585 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/.tmp/info/892aa953fe8442d3888587be348965fb as hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/info/892aa953fe8442d3888587be348965fb 2023-07-11 18:17:28,590 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 892aa953fe8442d3888587be348965fb 2023-07-11 18:17:28,591 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/info/892aa953fe8442d3888587be348965fb, entries=22, sequenceid=26, filesize=7.3 K 2023-07-11 18:17:28,591 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/.tmp/rep_barrier/e70cb5d123c949118e4ec84f9779f32b as hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/rep_barrier/e70cb5d123c949118e4ec84f9779f32b 2023-07-11 18:17:28,597 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e70cb5d123c949118e4ec84f9779f32b 2023-07-11 18:17:28,597 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/rep_barrier/e70cb5d123c949118e4ec84f9779f32b, entries=1, sequenceid=26, filesize=4.9 K 2023-07-11 18:17:28,598 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/.tmp/table/bf07b5e933464eb2822af7fd28d861ad as hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/table/bf07b5e933464eb2822af7fd28d861ad 2023-07-11 18:17:28,603 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for bf07b5e933464eb2822af7fd28d861ad 2023-07-11 18:17:28,604 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/table/bf07b5e933464eb2822af7fd28d861ad, 
entries=6, sequenceid=26, filesize=5.1 K 2023-07-11 18:17:28,605 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 148ms, sequenceid=26, compaction requested=false 2023-07-11 18:17:28,619 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-11 18:17:28,619 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-11 18:17:28,620 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:28,620 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-11 18:17:28,620 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-11 18:17:28,632 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:28,632 INFO [RS:2;jenkins-hbase4:41487] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41487,1689099444383; zookeeper connection closed. 2023-07-11 18:17:28,632 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:41487-0x101559aa18f0003, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:28,632 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@f61a7e4] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@f61a7e4 2023-07-11 18:17:28,655 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37037,1689099444235; all regions closed. 2023-07-11 18:17:28,655 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33705,1689099444082; all regions closed. 
2023-07-11 18:17:28,662 DEBUG [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs 2023-07-11 18:17:28,662 INFO [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33705%2C1689099444082.meta:.meta(num 1689099445587) 2023-07-11 18:17:28,666 DEBUG [RS:1;jenkins-hbase4:37037] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs 2023-07-11 18:17:28,666 INFO [RS:1;jenkins-hbase4:37037] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37037%2C1689099444235:(num 1689099445389) 2023-07-11 18:17:28,666 DEBUG [RS:1;jenkins-hbase4:37037] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,666 INFO [RS:1;jenkins-hbase4:37037] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,667 INFO [RS:1;jenkins-hbase4:37037] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:28,667 INFO [RS:1;jenkins-hbase4:37037] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-11 18:17:28,667 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:28,667 INFO [RS:1;jenkins-hbase4:37037] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-11 18:17:28,667 DEBUG [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/oldWALs 2023-07-11 18:17:28,667 INFO [RS:1;jenkins-hbase4:37037] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-11 18:17:28,667 INFO [RS:0;jenkins-hbase4:33705] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33705%2C1689099444082:(num 1689099445389) 2023-07-11 18:17:28,667 DEBUG [RS:0;jenkins-hbase4:33705] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,667 INFO [RS:0;jenkins-hbase4:33705] regionserver.LeaseManager(133): Closed leases 2023-07-11 18:17:28,668 INFO [RS:1;jenkins-hbase4:37037] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37037 2023-07-11 18:17:28,668 INFO [RS:0;jenkins-hbase4:33705] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-11 18:17:28,668 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-11 18:17:28,669 INFO [RS:0;jenkins-hbase4:33705] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33705 2023-07-11 18:17:28,672 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:28,672 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-11 18:17:28,672 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37037,1689099444235 2023-07-11 18:17:28,673 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:28,673 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33705,1689099444082 2023-07-11 18:17:28,673 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37037,1689099444235] 2023-07-11 18:17:28,674 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37037,1689099444235; numProcessing=3 2023-07-11 18:17:28,677 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37037,1689099444235 already deleted, retry=false 2023-07-11 18:17:28,677 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37037,1689099444235 expired; onlineServers=1 2023-07-11 18:17:28,677 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33705,1689099444082] 2023-07-11 18:17:28,677 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33705,1689099444082; numProcessing=4 2023-07-11 18:17:28,678 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33705,1689099444082 already deleted, retry=false 2023-07-11 18:17:28,678 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33705,1689099444082 expired; onlineServers=0 2023-07-11 18:17:28,678 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38477,1689099443878' ***** 2023-07-11 18:17:28,678 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-11 18:17:28,679 DEBUG [M:0;jenkins-hbase4:38477] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@71d0d80b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-11 18:17:28,679 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-11 18:17:28,681 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-11 18:17:28,681 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-11 18:17:28,681 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-11 18:17:28,682 INFO [M:0;jenkins-hbase4:38477] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@17d53e68{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-11 18:17:28,682 INFO [M:0;jenkins-hbase4:38477] server.AbstractConnector(383): Stopped ServerConnector@1f0d357d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:28,682 INFO [M:0;jenkins-hbase4:38477] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-11 18:17:28,683 INFO [M:0;jenkins-hbase4:38477] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@11dbe345{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-11 18:17:28,683 INFO [M:0;jenkins-hbase4:38477] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@20f20edd{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/hadoop.log.dir/,STOPPED} 2023-07-11 18:17:28,683 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38477,1689099443878 2023-07-11 18:17:28,684 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38477,1689099443878; all regions closed. 2023-07-11 18:17:28,684 DEBUG [M:0;jenkins-hbase4:38477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-11 18:17:28,684 INFO [M:0;jenkins-hbase4:38477] master.HMaster(1491): Stopping master jetty server 2023-07-11 18:17:28,684 INFO [M:0;jenkins-hbase4:38477] server.AbstractConnector(383): Stopped ServerConnector@2a39c821{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-11 18:17:28,684 DEBUG [M:0;jenkins-hbase4:38477] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-11 18:17:28,684 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-11 18:17:28,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099445124] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689099445124,5,FailOnTimeoutGroup] 2023-07-11 18:17:28,684 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099445116] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689099445116,5,FailOnTimeoutGroup] 2023-07-11 18:17:28,684 DEBUG [M:0;jenkins-hbase4:38477] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-11 18:17:28,685 INFO [M:0;jenkins-hbase4:38477] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-11 18:17:28,685 INFO [M:0;jenkins-hbase4:38477] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-11 18:17:28,685 INFO [M:0;jenkins-hbase4:38477] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-11 18:17:28,685 DEBUG [M:0;jenkins-hbase4:38477] master.HMaster(1512): Stopping service threads 2023-07-11 18:17:28,685 INFO [M:0;jenkins-hbase4:38477] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-11 18:17:28,685 ERROR [M:0;jenkins-hbase4:38477] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-11 18:17:28,685 INFO [M:0;jenkins-hbase4:38477] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-11 18:17:28,685 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-11 18:17:28,686 DEBUG [M:0;jenkins-hbase4:38477] zookeeper.ZKUtil(398): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-11 18:17:28,686 WARN [M:0;jenkins-hbase4:38477] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-11 18:17:28,686 INFO [M:0;jenkins-hbase4:38477] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-11 18:17:28,686 INFO [M:0;jenkins-hbase4:38477] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-11 18:17:28,686 DEBUG [M:0;jenkins-hbase4:38477] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-11 18:17:28,686 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:28,686 DEBUG [M:0;jenkins-hbase4:38477] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:28,686 DEBUG [M:0;jenkins-hbase4:38477] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-11 18:17:28,686 DEBUG [M:0;jenkins-hbase4:38477] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-11 18:17:28,686 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.24 KB heapSize=90.66 KB 2023-07-11 18:17:28,696 INFO [M:0;jenkins-hbase4:38477] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.24 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/27ea402606a14a8e8e35d4fcb68376e0 2023-07-11 18:17:28,701 DEBUG [M:0;jenkins-hbase4:38477] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/27ea402606a14a8e8e35d4fcb68376e0 as hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/27ea402606a14a8e8e35d4fcb68376e0 2023-07-11 18:17:28,706 INFO [M:0;jenkins-hbase4:38477] regionserver.HStore(1080): Added hdfs://localhost:43601/user/jenkins/test-data/5edbd682-c62d-079e-986f-280a0b800845/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/27ea402606a14a8e8e35d4fcb68376e0, entries=22, sequenceid=175, filesize=11.1 K 2023-07-11 18:17:28,710 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegion(2948): Finished flush of dataSize ~76.24 KB/78067, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=175, compaction requested=false 2023-07-11 18:17:28,712 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-11 18:17:28,712 DEBUG [M:0;jenkins-hbase4:38477] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-11 18:17:28,715 INFO [M:0;jenkins-hbase4:38477] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-11 18:17:28,715 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-11 18:17:28,716 INFO [M:0;jenkins-hbase4:38477] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38477 2023-07-11 18:17:28,717 DEBUG [M:0;jenkins-hbase4:38477] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,38477,1689099443878 already deleted, retry=false 2023-07-11 18:17:28,732 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:28,732 INFO [RS:3;jenkins-hbase4:36281] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36281,1689099446204; zookeeper connection closed. 
2023-07-11 18:17:28,732 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:36281-0x101559aa18f000b, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:28,732 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2d8874bc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2d8874bc 2023-07-11 18:17:29,334 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:29,334 INFO [M:0;jenkins-hbase4:38477] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38477,1689099443878; zookeeper connection closed. 2023-07-11 18:17:29,334 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): master:38477-0x101559aa18f0000, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:29,434 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:29,434 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:33705-0x101559aa18f0001, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:29,434 INFO [RS:0;jenkins-hbase4:33705] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33705,1689099444082; zookeeper connection closed. 2023-07-11 18:17:29,434 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@26fc81bf] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@26fc81bf 2023-07-11 18:17:29,534 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:29,534 INFO [RS:1;jenkins-hbase4:37037] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37037,1689099444235; zookeeper connection closed. 
2023-07-11 18:17:29,534 DEBUG [Listener at localhost/41775-EventThread] zookeeper.ZKWatcher(600): regionserver:37037-0x101559aa18f0002, quorum=127.0.0.1:50731, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-11 18:17:29,535 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@712ed415] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@712ed415 2023-07-11 18:17:29,535 INFO [Listener at localhost/41775] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-11 18:17:29,535 WARN [Listener at localhost/41775] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:29,538 INFO [Listener at localhost/41775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:29,641 WARN [BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:29,642 WARN [BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1290189580-172.31.14.131-1689099443058 (Datanode Uuid 87545289-74e9-49cc-8704-64bb6212230f) service to localhost/127.0.0.1:43601 2023-07-11 18:17:29,642 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data5/current/BP-1290189580-172.31.14.131-1689099443058] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:29,642 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data6/current/BP-1290189580-172.31.14.131-1689099443058] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:29,643 WARN [Listener at localhost/41775] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:29,646 INFO [Listener at localhost/41775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:29,750 WARN [BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:29,750 WARN [BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1290189580-172.31.14.131-1689099443058 (Datanode Uuid 2916cb80-a26a-4bb5-9342-deefdd8add05) service to localhost/127.0.0.1:43601 2023-07-11 18:17:29,750 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data3/current/BP-1290189580-172.31.14.131-1689099443058] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:29,751 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data4/current/BP-1290189580-172.31.14.131-1689099443058] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:29,751 WARN [Listener at localhost/41775] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-11 18:17:29,754 INFO [Listener at localhost/41775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:29,857 WARN [BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-11 18:17:29,857 WARN [BP-1290189580-172.31.14.131-1689099443058 heartbeating to localhost/127.0.0.1:43601] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1290189580-172.31.14.131-1689099443058 (Datanode Uuid 1bb41cfb-ff7f-48c3-974d-e8fbb34ed2b6) service to localhost/127.0.0.1:43601 2023-07-11 18:17:29,858 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data1/current/BP-1290189580-172.31.14.131-1689099443058] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:29,858 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4d05cf4a-1135-d003-b716-7c1c26d54a84/cluster_fdd18593-f757-ebab-1235-6d242e28b0d5/dfs/data/data2/current/BP-1290189580-172.31.14.131-1689099443058] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-11 18:17:29,867 INFO [Listener at localhost/41775] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-11 18:17:29,980 INFO [Listener at localhost/41775] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-11 18:17:30,005 INFO [Listener at localhost/41775] hbase.HBaseTestingUtility(1293): Minicluster is down
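The shutdown sequence recorded above is the standard teardown of the HBaseTestingUtility mini cluster: regions flush and close, WALs move to oldWALs, each region server and then the master stop, and finally the DFS datanodes and the mini ZooKeeper cluster are brought down. As a point of reference only, the following is a minimal sketch (not taken from this run; the class name, test body, and assertion are illustrative assumptions) of how a test in this module typically starts and stops the mini cluster whose lifecycle produced this log, using the public HBaseTestingUtility and StartMiniClusterOption APIs.

```java
// Hypothetical sketch of the mini-cluster lifecycle driven by tests like TestRSGroupsAdmin1.
// Only the HBaseTestingUtility / StartMiniClusterOption calls are real APIs; everything else
// (class name, placeholder test) is illustrative and not drawn from this log.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Mirrors the options logged at startup: 1 master, 3 region servers, 3 data nodes.
    TEST_UTIL.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .build());
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Triggers the teardown seen above: region flush/close, WAL archival,
    // region server and master shutdown, then DFS and MiniZK shutdown.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void clusterIsUp() throws Exception {
    // Placeholder check; real tests exercise the RSGroup admin API here.
    Assert.assertTrue(TEST_UTIL.getMiniHBaseCluster().getMaster().isInitialized());
  }
}
```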