2023-07-23 05:10:36,004 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3
2023-07-23 05:10:36,023 INFO  [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-23 05:10:36,046 INFO  [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-23 05:10:36,047 INFO  [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79, deleteOnExit=true
2023-07-23 05:10:36,047 INFO  [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-23 05:10:36,048 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/test.cache.data in system properties and HBase conf
2023-07-23 05:10:36,049 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.tmp.dir in system properties and HBase conf
2023-07-23 05:10:36,049 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir in system properties and HBase conf
2023-07-23 05:10:36,050 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-23 05:10:36,051 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-23 05:10:36,051 INFO  [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-23 05:10:36,170 WARN  [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-23 05:10:36,589 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-23 05:10:36,594 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-23 05:10:36,594 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-23 05:10:36,594 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-23 05:10:36,595 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 05:10:36,595 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-23 05:10:36,596 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-23 05:10:36,596 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-23 05:10:36,596 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 05:10:36,596 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-23 05:10:36,597 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/nfs.dump.dir in system properties and HBase conf
2023-07-23 05:10:36,597 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir in system properties and HBase conf
2023-07-23 05:10:36,598 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-23 05:10:36,598 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-23 05:10:36,598 INFO  [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-23 05:10:37,140 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 05:10:37,144 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 05:10:37,432 WARN  [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-23 05:10:37,609 INFO  [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-23 05:10:37,622 WARN  [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 05:10:37,654 INFO  [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 05:10:37,687 INFO  [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/Jetty_localhost_37987_hdfs____.c67rr6/webapp
2023-07-23 05:10:37,827 INFO  [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37987
2023-07-23 05:10:37,839 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-23 05:10:37,839 WARN  [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-23 05:10:38,332 WARN  [Listener at localhost/36893] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 05:10:38,440 WARN  [Listener at localhost/36893] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 05:10:38,461 WARN  [Listener at localhost/36893] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 05:10:38,467 INFO  [Listener at localhost/36893] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 05:10:38,472 INFO  [Listener at localhost/36893] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/Jetty_localhost_45477_datanode____.lub0km/webapp
2023-07-23 05:10:38,575 INFO  [Listener at localhost/36893] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45477
2023-07-23 05:10:39,002 WARN  [Listener at localhost/41677] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 05:10:39,024 WARN  [Listener at localhost/41677] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 05:10:39,028 WARN  [Listener at localhost/41677] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 05:10:39,030 INFO  [Listener at localhost/41677] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 05:10:39,038 INFO  [Listener at localhost/41677] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/Jetty_localhost_36021_datanode____.ag0mi5/webapp
2023-07-23 05:10:39,178 INFO  [Listener at localhost/41677] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36021
2023-07-23 05:10:39,194 WARN  [Listener at localhost/41621] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 05:10:39,216 WARN  [Listener at localhost/41621] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-23 05:10:39,221 WARN  [Listener at localhost/41621] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-23 05:10:39,224 INFO  [Listener at localhost/41621] log.Slf4jLog(67): jetty-6.1.26
2023-07-23 05:10:39,230 INFO  [Listener at localhost/41621] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/Jetty_localhost_46561_datanode____w4e9xx/webapp
2023-07-23 05:10:39,356 INFO  [Listener at localhost/41621] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46561
2023-07-23 05:10:39,377 WARN  [Listener at localhost/44477] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-23 05:10:39,568 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30d343ba5b47c20a: Processing first storage report for DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b from datanode 6caa7b21-cb48-49f8-bb38-712cb611ee48
2023-07-23 05:10:39,570 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30d343ba5b47c20a: from storage DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b node DatanodeRegistration(127.0.0.1:37209, datanodeUuid=6caa7b21-cb48-49f8-bb38-712cb611ee48, infoPort=37947, infoSecurePort=0, ipcPort=44477, storageInfo=lv=-57;cid=testClusterID;nsid=1720446541;c=1690089037211), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0
2023-07-23 05:10:39,570 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe91d8ea658129b84: Processing first storage report for DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06 from datanode 86f67442-7973-4a1c-bf80-2134abfef945
2023-07-23 05:10:39,570 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe91d8ea658129b84: from storage DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06 node DatanodeRegistration(127.0.0.1:39809, datanodeUuid=86f67442-7973-4a1c-bf80-2134abfef945, infoPort=41839, infoSecurePort=0, ipcPort=41677, storageInfo=lv=-57;cid=testClusterID;nsid=1720446541;c=1690089037211), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 05:10:39,570 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44decf13f52d6277: Processing first storage report for DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459 from datanode 1df2bbdc-3ed1-47fb-8373-7429a6af5df3
2023-07-23 05:10:39,571 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44decf13f52d6277: from storage DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459 node DatanodeRegistration(127.0.0.1:44583, datanodeUuid=1df2bbdc-3ed1-47fb-8373-7429a6af5df3, infoPort=34739, infoSecurePort=0, ipcPort=41621, storageInfo=lv=-57;cid=testClusterID;nsid=1720446541;c=1690089037211), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-23 05:10:39,571 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x30d343ba5b47c20a: Processing first storage report for DS-fdd962ca-d872-45b7-be5d-44ddec4b5314 from datanode 6caa7b21-cb48-49f8-bb38-712cb611ee48
2023-07-23 05:10:39,571 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x30d343ba5b47c20a: from storage DS-fdd962ca-d872-45b7-be5d-44ddec4b5314 node DatanodeRegistration(127.0.0.1:37209, datanodeUuid=6caa7b21-cb48-49f8-bb38-712cb611ee48, infoPort=37947, infoSecurePort=0, ipcPort=44477, storageInfo=lv=-57;cid=testClusterID;nsid=1720446541;c=1690089037211), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 05:10:39,571 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xe91d8ea658129b84: Processing first storage report for DS-4613482c-dc96-4a6a-a9e6-b4fb47118719 from datanode 86f67442-7973-4a1c-bf80-2134abfef945
2023-07-23 05:10:39,571 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xe91d8ea658129b84: from storage DS-4613482c-dc96-4a6a-a9e6-b4fb47118719 node DatanodeRegistration(127.0.0.1:39809, datanodeUuid=86f67442-7973-4a1c-bf80-2134abfef945, infoPort=41839, infoSecurePort=0, ipcPort=41677, storageInfo=lv=-57;cid=testClusterID;nsid=1720446541;c=1690089037211), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 05:10:39,572 INFO  [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x44decf13f52d6277: Processing first storage report for DS-a763e4f6-9eee-48a8-b500-f259aa09409a from datanode 1df2bbdc-3ed1-47fb-8373-7429a6af5df3
2023-07-23 05:10:39,572 INFO  [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x44decf13f52d6277: from storage DS-a763e4f6-9eee-48a8-b500-f259aa09409a node DatanodeRegistration(127.0.0.1:44583, datanodeUuid=1df2bbdc-3ed1-47fb-8373-7429a6af5df3, infoPort=34739, infoSecurePort=0, ipcPort=41621, storageInfo=lv=-57;cid=testClusterID;nsid=1720446541;c=1690089037211), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-23 05:10:39,795 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3
2023-07-23 05:10:39,880 INFO  [Listener at localhost/44477] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/zookeeper_0, clientPort=63392, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-23 05:10:39,893 INFO  [Listener at localhost/44477] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=63392
2023-07-23 05:10:39,901 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:39,903 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:40,588 INFO  [Listener at localhost/44477] util.FSUtils(471): Created version file at hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04 with version=8
2023-07-23 05:10:40,589 INFO  [Listener at localhost/44477] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/hbase-staging
2023-07-23 05:10:40,600 DEBUG [Listener at localhost/44477] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-23 05:10:40,600 DEBUG [Listener at localhost/44477] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-23 05:10:40,601 DEBUG [Listener at localhost/44477] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-23 05:10:40,601 DEBUG [Listener at localhost/44477] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
2023-07-23 05:10:40,966 INFO  [Listener at localhost/44477] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-23 05:10:41,515 INFO  [Listener at localhost/44477] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 05:10:41,554 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:41,555 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:41,555 INFO  [Listener at localhost/44477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 05:10:41,555 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:41,555 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 05:10:41,755 INFO  [Listener at localhost/44477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 05:10:41,861 DEBUG [Listener at localhost/44477] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-23 05:10:41,991 INFO  [Listener at localhost/44477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37433
2023-07-23 05:10:42,007 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:42,009 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:42,038 INFO  [Listener at localhost/44477] zookeeper.RecoverableZooKeeper(93): Process identifier=master:37433 connecting to ZooKeeper ensemble=127.0.0.1:63392
2023-07-23 05:10:42,088 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:374330x0, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 05:10:42,093 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:37433-0x1019096aaec0000 connected
2023-07-23 05:10:42,154 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 05:10:42,155 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 05:10:42,160 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 05:10:42,174 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37433
2023-07-23 05:10:42,178 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37433
2023-07-23 05:10:42,181 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37433
2023-07-23 05:10:42,183 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37433
2023-07-23 05:10:42,186 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37433
2023-07-23 05:10:42,232 INFO  [Listener at localhost/44477] log.Log(170): Logging initialized @7072ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-23 05:10:42,384 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 05:10:42,384 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 05:10:42,385 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 05:10:42,388 INFO  [Listener at localhost/44477] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-23 05:10:42,388 INFO  [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 05:10:42,388 INFO  [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 05:10:42,393 INFO  [Listener at localhost/44477] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 05:10:42,461 INFO  [Listener at localhost/44477] http.HttpServer(1146): Jetty bound to port 37311
2023-07-23 05:10:42,463 INFO  [Listener at localhost/44477] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 05:10:42,496 INFO  [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 05:10:42,499 INFO  [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4ad2bb29{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,AVAILABLE}
2023-07-23 05:10:42,500 INFO  [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 05:10:42,500 INFO  [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c64b740{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 05:10:42,676 INFO  [Listener at localhost/44477] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 05:10:42,688 INFO  [Listener at localhost/44477] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 05:10:42,689 INFO  [Listener at localhost/44477] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 05:10:42,690 INFO  [Listener at localhost/44477] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-23 05:10:42,697 INFO  [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 05:10:42,727 INFO  [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2b2c25c2{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/jetty-0_0_0_0-37311-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4491037356729646831/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-23 05:10:42,742 INFO  [Listener at localhost/44477] server.AbstractConnector(333): Started ServerConnector@35ac267a{HTTP/1.1, (http/1.1)}{0.0.0.0:37311}
2023-07-23 05:10:42,742 INFO  [Listener at localhost/44477] server.Server(415): Started @7582ms
2023-07-23 05:10:42,746 INFO  [Listener at localhost/44477] master.HMaster(444): hbase.rootdir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04, hbase.cluster.distributed=false
2023-07-23 05:10:42,836 INFO  [Listener at localhost/44477] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 05:10:42,836 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:42,837 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:42,837 INFO  [Listener at localhost/44477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 05:10:42,837 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:42,837 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 05:10:42,845 INFO  [Listener at localhost/44477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 05:10:42,849 INFO  [Listener at localhost/44477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45681
2023-07-23 05:10:42,853 INFO  [Listener at localhost/44477] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 05:10:42,860 DEBUG [Listener at localhost/44477] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 05:10:42,862 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:42,863 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:42,865 INFO  [Listener at localhost/44477] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45681 connecting to ZooKeeper ensemble=127.0.0.1:63392
2023-07-23 05:10:42,873 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:456810x0, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 05:10:42,875 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45681-0x1019096aaec0001 connected
2023-07-23 05:10:42,875 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 05:10:42,877 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 05:10:42,878 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 05:10:42,879 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45681
2023-07-23 05:10:42,879 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45681
2023-07-23 05:10:42,880 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45681
2023-07-23 05:10:42,886 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45681
2023-07-23 05:10:42,887 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45681
2023-07-23 05:10:42,889 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 05:10:42,889 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 05:10:42,889 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 05:10:42,891 INFO  [Listener at localhost/44477] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 05:10:42,891 INFO  [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 05:10:42,891 INFO  [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 05:10:42,891 INFO  [Listener at localhost/44477] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 05:10:42,894 INFO  [Listener at localhost/44477] http.HttpServer(1146): Jetty bound to port 34293
2023-07-23 05:10:42,894 INFO  [Listener at localhost/44477] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-23 05:10:42,903 INFO  [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 05:10:42,903 INFO  [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10630bfe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,AVAILABLE}
2023-07-23 05:10:42,903 INFO  [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 05:10:42,904 INFO  [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@282c1c14{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-23 05:10:43,056 INFO  [Listener at localhost/44477] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-23 05:10:43,058 INFO  [Listener at localhost/44477] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-23 05:10:43,058 INFO  [Listener at localhost/44477] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-23 05:10:43,058 INFO  [Listener at localhost/44477] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-23 05:10:43,059 INFO  [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-23 05:10:43,063 INFO  [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6770849c{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/jetty-0_0_0_0-34293-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2214306015454279024/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-23 05:10:43,064 INFO  [Listener at localhost/44477] server.AbstractConnector(333): Started ServerConnector@24b5075d{HTTP/1.1, (http/1.1)}{0.0.0.0:34293}
2023-07-23 05:10:43,065 INFO  [Listener at localhost/44477] server.Server(415): Started @7904ms
2023-07-23 05:10:43,079 INFO  [Listener at localhost/44477] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-23 05:10:43,079 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:43,079 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:43,080 INFO  [Listener at localhost/44477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-23 05:10:43,080 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-23 05:10:43,080 INFO  [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-23 05:10:43,080 INFO  [Listener at localhost/44477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-23 05:10:43,085 INFO  [Listener at localhost/44477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37441
2023-07-23 05:10:43,086 INFO  [Listener at localhost/44477] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-23 05:10:43,087 DEBUG [Listener at localhost/44477] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-23 05:10:43,088 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:43,090 INFO  [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-23 05:10:43,092 INFO  [Listener at localhost/44477] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37441 connecting to ZooKeeper ensemble=127.0.0.1:63392
2023-07-23 05:10:43,105 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:374410x0, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-23 05:10:43,108 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37441-0x1019096aaec0002 connected
2023-07-23 05:10:43,108 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-23 05:10:43,109 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-23 05:10:43,110 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-23 05:10:43,114 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37441
2023-07-23 05:10:43,115 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37441
2023-07-23 05:10:43,116 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37441
2023-07-23 05:10:43,116 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37441
2023-07-23 05:10:43,118 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37441
2023-07-23 05:10:43,121 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-23 05:10:43,122 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-23 05:10:43,122 INFO  [Listener at localhost/44477] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-23 05:10:43,123 INFO  [Listener at localhost/44477] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-23 05:10:43,123 INFO  [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-23 05:10:43,123 INFO  [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-23 05:10:43,123 INFO  [Listener at localhost/44477] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-23 05:10:43,124 INFO [Listener at localhost/44477] http.HttpServer(1146): Jetty bound to port 33889 2023-07-23 05:10:43,124 INFO [Listener at localhost/44477] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:10:43,138 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:43,139 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@539fa719{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:10:43,141 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:43,141 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c8a8cb2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:10:43,280 INFO [Listener at localhost/44477] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:10:43,281 INFO [Listener at localhost/44477] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:10:43,282 INFO [Listener at localhost/44477] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:10:43,282 INFO [Listener at localhost/44477] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:10:43,283 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:43,284 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@46ffcd75{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/jetty-0_0_0_0-33889-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4321239374108352831/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:10:43,286 INFO [Listener at localhost/44477] server.AbstractConnector(333): Started ServerConnector@7f6e0343{HTTP/1.1, (http/1.1)}{0.0.0.0:33889} 2023-07-23 05:10:43,286 INFO [Listener at localhost/44477] server.Server(415): Started @8126ms 2023-07-23 05:10:43,305 INFO [Listener at localhost/44477] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:10:43,305 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:10:43,305 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:10:43,306 INFO [Listener at localhost/44477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:10:43,306 INFO 
[Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:10:43,306 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:10:43,306 INFO [Listener at localhost/44477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:10:43,308 INFO [Listener at localhost/44477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46173 2023-07-23 05:10:43,309 INFO [Listener at localhost/44477] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:10:43,316 DEBUG [Listener at localhost/44477] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:10:43,317 INFO [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:10:43,320 INFO [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:10:43,322 INFO [Listener at localhost/44477] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46173 connecting to ZooKeeper ensemble=127.0.0.1:63392 2023-07-23 05:10:43,331 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:461730x0, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:10:43,333 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:461730x0, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:10:43,333 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46173-0x1019096aaec0003 connected 2023-07-23 05:10:43,334 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:10:43,335 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:10:43,336 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46173 2023-07-23 05:10:43,340 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46173 2023-07-23 05:10:43,341 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46173 2023-07-23 05:10:43,344 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46173 2023-07-23 05:10:43,344 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=46173 2023-07-23 05:10:43,347 INFO [Listener at localhost/44477] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:10:43,347 INFO [Listener at localhost/44477] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:10:43,348 INFO [Listener at localhost/44477] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:10:43,348 INFO [Listener at localhost/44477] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:10:43,348 INFO [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:10:43,348 INFO [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:10:43,349 INFO [Listener at localhost/44477] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 05:10:43,350 INFO [Listener at localhost/44477] http.HttpServer(1146): Jetty bound to port 37579 2023-07-23 05:10:43,350 INFO [Listener at localhost/44477] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:10:43,358 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:43,359 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67c48ba9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:10:43,359 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:43,360 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5109bb49{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:10:43,488 INFO [Listener at localhost/44477] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:10:43,489 INFO [Listener at localhost/44477] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:10:43,490 INFO [Listener at localhost/44477] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:10:43,490 INFO [Listener at localhost/44477] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:10:43,491 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:43,493 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@2f709731{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/jetty-0_0_0_0-37579-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4891853346310070352/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:10:43,494 INFO [Listener at localhost/44477] server.AbstractConnector(333): Started ServerConnector@206a46fd{HTTP/1.1, (http/1.1)}{0.0.0.0:37579} 2023-07-23 05:10:43,495 INFO [Listener at localhost/44477] server.Server(415): Started @8334ms 2023-07-23 05:10:43,503 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:10:43,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@293472ac{HTTP/1.1, (http/1.1)}{0.0.0.0:40483} 2023-07-23 05:10:43,513 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8352ms 2023-07-23 05:10:43,513 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:43,526 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 05:10:43,527 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:43,547 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:10:43,547 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:10:43,547 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:10:43,547 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:10:43,549 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:43,550 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:10:43,552 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,37433,1690089040778 from backup master directory 2023-07-23 05:10:43,552 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:10:43,557 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:43,557 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 05:10:43,558 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:10:43,558 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:43,561 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-23 05:10:43,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-23 05:10:43,687 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/hbase.id with ID: 91356a8d-4149-4a07-ac08-30b1745b5070 2023-07-23 05:10:43,745 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:10:43,764 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:43,831 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4be9f084 to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:10:43,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3556c7fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:43,883 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:43,886 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 05:10:43,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-23 05:10:43,913 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-23 05:10:43,916 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-23 05:10:43,922 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at 
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-23 05:10:43,923 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:43,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store-tmp 2023-07-23 05:10:44,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:44,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 05:10:44,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:10:44,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:10:44,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 05:10:44,036 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:10:44,037 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 05:10:44,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:10:44,039 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/WALs/jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:44,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37433%2C1690089040778, suffix=, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/WALs/jenkins-hbase4.apache.org,37433,1690089040778, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/oldWALs, maxLogs=10 2023-07-23 05:10:44,172 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:44,182 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:44,171 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:44,191 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-23 05:10:44,283 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/WALs/jenkins-hbase4.apache.org,37433,1690089040778/jenkins-hbase4.apache.org%2C37433%2C1690089040778.1690089044079 2023-07-23 05:10:44,286 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK], DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK]] 2023-07-23 05:10:44,287 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:44,288 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:44,291 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:10:44,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:10:44,374 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:10:44,382 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 05:10:44,422 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 05:10:44,437 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-23 05:10:44,442 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:10:44,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:10:44,465 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:10:44,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:44,471 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10025020800, jitterRate=-0.06634718179702759}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:44,472 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:10:44,473 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 05:10:44,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 05:10:44,508 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 05:10:44,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 05:10:44,515 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-23 05:10:44,563 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 48 msec 2023-07-23 05:10:44,564 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 05:10:44,591 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 05:10:44,596 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-23 05:10:44,604 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 05:10:44,610 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 05:10:44,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 05:10:44,617 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:44,618 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 05:10:44,619 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 05:10:44,633 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 05:10:44,638 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:10:44,638 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:10:44,638 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:10:44,638 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:10:44,638 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:44,639 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,37433,1690089040778, sessionid=0x1019096aaec0000, setting cluster-up flag (Was=false) 2023-07-23 05:10:44,661 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:44,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 05:10:44,670 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:44,676 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:44,683 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 05:10:44,685 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:44,688 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.hbase-snapshot/.tmp 2023-07-23 05:10:44,704 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(951): ClusterId : 91356a8d-4149-4a07-ac08-30b1745b5070 2023-07-23 05:10:44,704 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(951): ClusterId : 91356a8d-4149-4a07-ac08-30b1745b5070 2023-07-23 05:10:44,704 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(951): ClusterId : 91356a8d-4149-4a07-ac08-30b1745b5070 2023-07-23 05:10:44,712 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:10:44,712 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:10:44,712 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:10:44,720 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:10:44,720 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:10:44,720 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:10:44,720 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:10:44,720 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:10:44,720 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:10:44,725 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:10:44,726 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:10:44,726 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:10:44,727 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ReadOnlyZKClient(139): Connect 0x7aa6fe8f to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-23 05:10:44,729 DEBUG [RS:2;jenkins-hbase4:46173] zookeeper.ReadOnlyZKClient(139): Connect 0x0a76d190 to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:10:44,729 DEBUG [RS:1;jenkins-hbase4:37441] zookeeper.ReadOnlyZKClient(139): Connect 0x600bfc99 to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:10:44,739 DEBUG [RS:2;jenkins-hbase4:46173] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4b12eae, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:44,741 DEBUG [RS:2;jenkins-hbase4:46173] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8ebae0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:10:44,746 DEBUG [RS:1;jenkins-hbase4:37441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@cf8c282, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:44,746 DEBUG [RS:1;jenkins-hbase4:37441] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@18167f32, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:10:44,747 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@434ca962, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:44,747 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@40f05887, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:10:44,777 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:46173 2023-07-23 05:10:44,783 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:45681 2023-07-23 05:10:44,786 INFO [RS:2;jenkins-hbase4:46173] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:10:44,787 INFO [RS:2;jenkins-hbase4:46173] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:10:44,786 INFO [RS:0;jenkins-hbase4:45681] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:10:44,787 INFO [RS:0;jenkins-hbase4:45681] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:10:44,787 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-23 05:10:44,787 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:10:44,788 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37441 2023-07-23 05:10:44,788 INFO [RS:1;jenkins-hbase4:37441] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:10:44,788 INFO [RS:1;jenkins-hbase4:37441] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:10:44,788 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:10:44,791 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:45681, startcode=1690089042835 2023-07-23 05:10:44,791 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:37441, startcode=1690089043078 2023-07-23 05:10:44,791 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:46173, startcode=1690089043304 2023-07-23 05:10:44,818 DEBUG [RS:2;jenkins-hbase4:46173] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:10:44,818 DEBUG [RS:1;jenkins-hbase4:37441] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:10:44,818 DEBUG [RS:0;jenkins-hbase4:45681] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:10:44,820 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 05:10:44,834 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 05:10:44,836 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:10:44,839 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 05:10:44,840 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-23 05:10:44,888 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37801, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:10:44,891 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43607, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:10:44,888 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58645, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:10:44,911 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:44,926 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:44,927 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:44,953 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 05:10:44,953 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 05:10:44,953 DEBUG [RS:2;jenkins-hbase4:46173] 
regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 05:10:44,953 WARN [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 05:10:44,953 WARN [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 05:10:44,953 WARN [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-23 05:10:44,959 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 05:10:45,013 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 05:10:45,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 05:10:45,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 05:10:45,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,024 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:10:45,025 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690089075039 2023-07-23 05:10:45,042 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 05:10:45,046 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 05:10:45,046 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 05:10:45,047 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 05:10:45,050 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 
'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:45,054 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:45681, startcode=1690089042835 2023-07-23 05:10:45,054 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:37441, startcode=1690089043078 2023-07-23 05:10:45,055 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:46173, startcode=1690089043304 2023-07-23 05:10:45,056 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:45,056 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 05:10:45,057 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:45,057 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 05:10:45,058 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 05:10:45,058 WARN [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 
2023-07-23 05:10:45,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 05:10:45,058 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:45,058 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 05:10:45,060 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,061 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 05:10:45,061 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(2830): Master is not running yet 2023-07-23 05:10:45,061 WARN [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-23 05:10:45,061 WARN [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 200 ms and then retrying. 2023-07-23 05:10:45,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 05:10:45,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 05:10:45,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 05:10:45,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 05:10:45,067 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 05:10:45,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089045069,5,FailOnTimeoutGroup] 2023-07-23 05:10:45,069 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089045069,5,FailOnTimeoutGroup] 2023-07-23 05:10:45,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-23 05:10:45,069 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 05:10:45,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,134 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:45,135 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:45,135 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04 2023-07-23 05:10:45,162 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:45,165 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:10:45,169 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info 2023-07-23 05:10:45,169 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:10:45,170 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:45,171 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 05:10:45,174 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:10:45,175 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:10:45,175 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:45,176 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:10:45,178 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table 2023-07-23 05:10:45,179 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:10:45,180 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, 
memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:45,183 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740 2023-07-23 05:10:45,184 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740 2023-07-23 05:10:45,192 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 05:10:45,194 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:10:45,198 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:45,199 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9967127520, jitterRate=-0.07173891365528107}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:10:45,199 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:10:45,200 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:10:45,200 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:10:45,200 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:10:45,200 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:10:45,200 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:10:45,201 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:10:45,201 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:10:45,208 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 05:10:45,208 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 05:10:45,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 05:10:45,231 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 05:10:45,236 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, 
retain=false 2023-07-23 05:10:45,259 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:37441, startcode=1690089043078 2023-07-23 05:10:45,262 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:45681, startcode=1690089042835 2023-07-23 05:10:45,262 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:46173, startcode=1690089043304 2023-07-23 05:10:45,264 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,265 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:10:45,266 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 05:10:45,270 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:10:45,270 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 05:10:45,271 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04 2023-07-23 05:10:45,271 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,271 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36893 2023-07-23 05:10:45,271 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
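The CompactionConfiguration entries logged earlier for the meta column families (info, rep_barrier, table) echo HBase's per-store compaction defaults: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0, and a 7-day major compaction period with 0.5 jitter. As a minimal sketch, assuming a stock HBase 2.x client on the classpath and an illustrative class name, these same values map onto the standard tuning keys like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Illustrative only: mirrors the values reported by CompactionConfiguration above.
    public class CompactionDefaultsSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
        conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // selection ratio
        conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
        conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period: 7 days
        conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
        System.out.println("min files to compact = " + conf.getInt("hbase.hstore.compaction.min", -1));
      }
    }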
2023-07-23 05:10:45,272 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04 2023-07-23 05:10:45,272 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 05:10:45,271 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37311 2023-07-23 05:10:45,272 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36893 2023-07-23 05:10:45,272 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37311 2023-07-23 05:10:45,273 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04 2023-07-23 05:10:45,273 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36893 2023-07-23 05:10:45,273 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37311 2023-07-23 05:10:45,281 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:10:45,283 DEBUG [RS:1;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,283 WARN [RS:1;jenkins-hbase4:37441] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:10:45,283 INFO [RS:1;jenkins-hbase4:37441] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:45,283 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,283 DEBUG [RS:2;jenkins-hbase4:46173] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,283 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,284 WARN [RS:2;jenkins-hbase4:46173] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:10:45,284 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45681,1690089042835] 2023-07-23 05:10:45,283 WARN [RS:0;jenkins-hbase4:45681] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
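The ZKWatcher/ZKUtil entries above show the region servers registering ephemeral znodes under /hbase/rs on the quorum at 127.0.0.1:63392. A hedged sketch of how a client would point at the same quorum and base znode; the quorum host, port, and znode parent are taken from the log, while the class itself is illustrative and not part of the test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClientConnectionSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");          // quorum host from the log
        conf.set("hbase.zookeeper.property.clientPort", "63392"); // quorum port from the log
        conf.set("zookeeper.znode.parent", "/hbase");             // baseZNode from the log
        // Region locations (e.g. /hbase/meta-region-server) are resolved through this quorum.
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
          System.out.println("connected: " + !connection.isClosed());
        }
      }
    }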
2023-07-23 05:10:45,284 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46173,1690089043304] 2023-07-23 05:10:45,284 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37441,1690089043078] 2023-07-23 05:10:45,284 INFO [RS:2;jenkins-hbase4:46173] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:45,284 INFO [RS:0;jenkins-hbase4:45681] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:45,285 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,285 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,300 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,300 DEBUG [RS:2;jenkins-hbase4:46173] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,300 DEBUG [RS:1;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,300 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,300 DEBUG [RS:2;jenkins-hbase4:46173] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,301 DEBUG [RS:1;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,301 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,301 DEBUG [RS:2;jenkins-hbase4:46173] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,301 DEBUG [RS:1;jenkins-hbase4:37441] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,313 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:10:45,313 DEBUG 
[RS:2;jenkins-hbase4:46173] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:10:45,313 DEBUG [RS:1;jenkins-hbase4:37441] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:10:45,323 INFO [RS:0;jenkins-hbase4:45681] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:10:45,323 INFO [RS:1;jenkins-hbase4:37441] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:10:45,324 INFO [RS:2;jenkins-hbase4:46173] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:10:45,346 INFO [RS:0;jenkins-hbase4:45681] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:10:45,346 INFO [RS:2;jenkins-hbase4:46173] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:10:45,347 INFO [RS:1;jenkins-hbase4:37441] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:10:45,351 INFO [RS:0;jenkins-hbase4:45681] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:10:45,351 INFO [RS:1;jenkins-hbase4:37441] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:10:45,352 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,351 INFO [RS:2;jenkins-hbase4:46173] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:10:45,352 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,352 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,353 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:10:45,354 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:10:45,354 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:10:45,362 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,362 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,362 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
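The MemStoreFlusher and PressureAwareCompactionThroughputController lines above report a global memstore limit of 782.4 M with a 743.3 M low-water mark and compaction throughput bounds of 50 to 100 MB/s. A sketch of the corresponding tuning keys, assuming stock HBase 2.x defaults otherwise; the absolute memstore sizes in the log follow from these heap fractions applied to the test JVM's heap:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FlushAndThroughputSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Global memstore limit as a fraction of the region server heap, and its low-water mark.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Pressure-aware compaction throughput bounds (bytes/second), as logged: 50-100 MB/s.
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
      }
    }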
2023-07-23 05:10:45,363 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,363 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,363 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,363 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,363 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,363 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:10:45,363 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, 
corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:0;jenkins-hbase4:45681] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,364 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,365 DEBUG [RS:2;jenkins-hbase4:46173] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,365 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,365 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:10:45,366 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,366 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,366 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,366 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,366 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,366 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,366 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,366 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,366 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
2023-07-23 05:10:45,366 DEBUG [RS:1;jenkins-hbase4:37441] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:45,367 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,367 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,367 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,384 INFO [RS:0;jenkins-hbase4:45681] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:10:45,384 INFO [RS:2;jenkins-hbase4:46173] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:10:45,384 INFO [RS:1;jenkins-hbase4:37441] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:10:45,388 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46173,1690089043304-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,388 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45681,1690089042835-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,388 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37441,1690089043078-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,388 DEBUG [jenkins-hbase4:37433] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 05:10:45,405 DEBUG [jenkins-hbase4:37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:45,406 DEBUG [jenkins-hbase4:37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:45,406 DEBUG [jenkins-hbase4:37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:45,406 DEBUG [jenkins-hbase4:37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:45,406 DEBUG [jenkins-hbase4:37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:45,409 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37441,1690089043078, state=OPENING 2023-07-23 05:10:45,414 INFO [RS:0;jenkins-hbase4:45681] regionserver.Replication(203): jenkins-hbase4.apache.org,45681,1690089042835 started 2023-07-23 05:10:45,414 INFO [RS:1;jenkins-hbase4:37441] regionserver.Replication(203): jenkins-hbase4.apache.org,37441,1690089043078 started 2023-07-23 05:10:45,414 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45681,1690089042835, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45681, sessionid=0x1019096aaec0001 2023-07-23 05:10:45,414 INFO [RS:2;jenkins-hbase4:46173] regionserver.Replication(203): jenkins-hbase4.apache.org,46173,1690089043304 started 2023-07-23 05:10:45,415 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37441,1690089043078, RpcServer on 
jenkins-hbase4.apache.org/172.31.14.131:37441, sessionid=0x1019096aaec0002 2023-07-23 05:10:45,415 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46173,1690089043304, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46173, sessionid=0x1019096aaec0003 2023-07-23 05:10:45,415 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:10:45,415 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:10:45,415 DEBUG [RS:0;jenkins-hbase4:45681] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,415 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:10:45,416 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45681,1690089042835' 2023-07-23 05:10:45,416 DEBUG [RS:2;jenkins-hbase4:46173] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,415 DEBUG [RS:1;jenkins-hbase4:37441] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,416 DEBUG [RS:2;jenkins-hbase4:46173] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46173,1690089043304' 2023-07-23 05:10:45,416 DEBUG [RS:2;jenkins-hbase4:46173] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:10:45,416 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:10:45,416 DEBUG [RS:1;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37441,1690089043078' 2023-07-23 05:10:45,417 DEBUG [RS:1;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:10:45,417 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:10:45,417 DEBUG [RS:2;jenkins-hbase4:46173] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:10:45,417 DEBUG [RS:1;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:10:45,418 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 05:10:45,418 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:10:45,418 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:10:45,418 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:10:45,418 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 
2023-07-23 05:10:45,418 DEBUG [RS:1;jenkins-hbase4:37441] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,418 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:10:45,419 DEBUG [RS:1;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37441,1690089043078' 2023-07-23 05:10:45,419 DEBUG [RS:1;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:10:45,419 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:10:45,419 DEBUG [RS:0;jenkins-hbase4:45681] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:45,419 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45681,1690089042835' 2023-07-23 05:10:45,419 DEBUG [RS:2;jenkins-hbase4:46173] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:45,419 DEBUG [RS:2;jenkins-hbase4:46173] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46173,1690089043304' 2023-07-23 05:10:45,420 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:45,420 DEBUG [RS:1;jenkins-hbase4:37441] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:10:45,419 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:10:45,420 DEBUG [RS:2;jenkins-hbase4:46173] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:10:45,420 DEBUG [RS:1;jenkins-hbase4:37441] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:10:45,420 INFO [RS:1;jenkins-hbase4:37441] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:10:45,420 DEBUG [RS:0;jenkins-hbase4:45681] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:10:45,420 INFO [RS:1;jenkins-hbase4:37441] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
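The flush-table-proc and online-snapshot procedure members started above are the region-server side of coordinated flush and snapshot operations that a client drives through the Admin API. A minimal, hedged sketch of those client calls; the table name and snapshot name are illustrative and do not appear in this test:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class FlushAndSnapshotSketch {
      public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = connection.getAdmin()) {
          TableName table = TableName.valueOf("t1"); // illustrative table name
          admin.flush(table);                        // handled by the flush-table-proc members
          admin.snapshot("t1-snap", table);          // handled by the online-snapshot members
        }
      }
    }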
2023-07-23 05:10:45,420 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:10:45,421 DEBUG [RS:2;jenkins-hbase4:46173] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:10:45,421 DEBUG [RS:2;jenkins-hbase4:46173] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:10:45,421 INFO [RS:2;jenkins-hbase4:46173] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:10:45,422 INFO [RS:2;jenkins-hbase4:46173] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 05:10:45,423 DEBUG [RS:0;jenkins-hbase4:45681] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:10:45,423 INFO [RS:0;jenkins-hbase4:45681] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:10:45,423 INFO [RS:0;jenkins-hbase4:45681] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 05:10:45,425 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:45,533 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45681%2C1690089042835, suffix=, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,45681,1690089042835, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs, maxLogs=32 2023-07-23 05:10:45,534 INFO [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46173%2C1690089043304, suffix=, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,46173,1690089043304, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs, maxLogs=32 2023-07-23 05:10:45,533 INFO [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37441%2C1690089043078, suffix=, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,37441,1690089043078, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs, maxLogs=32 2023-07-23 05:10:45,564 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:45,564 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:45,565 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, 
datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:45,565 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:45,566 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:45,567 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:45,567 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:45,567 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:45,568 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:45,580 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,45681,1690089042835/jenkins-hbase4.apache.org%2C45681%2C1690089042835.1690089045538 2023-07-23 05:10:45,581 INFO [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,37441,1690089043078/jenkins-hbase4.apache.org%2C37441%2C1690089043078.1690089045538 2023-07-23 05:10:45,581 DEBUG [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK]] 2023-07-23 05:10:45,581 INFO [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,46173,1690089043304/jenkins-hbase4.apache.org%2C46173%2C1690089043304.1690089045537 2023-07-23 05:10:45,585 DEBUG [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK]] 2023-07-23 05:10:45,585 DEBUG [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(887): Create new AsyncFSWAL 
writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK], DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK]] 2023-07-23 05:10:45,610 WARN [ReadOnlyZKClient-127.0.0.1:63392@0x4be9f084] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-23 05:10:45,618 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,623 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:45,626 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44288, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:45,637 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:45,650 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44294, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:45,651 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37441] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:44294 deadline: 1690089105650, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:45,654 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 05:10:45,656 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:45,660 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37441%2C1690089043078.meta, suffix=.meta, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,37441,1690089043078, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs, maxLogs=32 2023-07-23 05:10:45,681 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:45,681 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:45,683 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:45,690 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,37441,1690089043078/jenkins-hbase4.apache.org%2C37441%2C1690089043078.meta.1690089045662.meta 2023-07-23 05:10:45,691 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK], DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK]] 2023-07-23 05:10:45,726 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:45,732 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:10:45,738 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 05:10:45,740 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-23 05:10:45,747 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 05:10:45,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:45,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 05:10:45,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 05:10:45,767 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:10:45,785 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info 2023-07-23 05:10:45,785 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info 2023-07-23 05:10:45,786 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for 
minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:10:45,787 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:45,788 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 05:10:45,790 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:10:45,790 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:10:45,790 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:10:45,793 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:45,793 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:10:45,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table 2023-07-23 05:10:45,795 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table 2023-07-23 05:10:45,796 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:10:45,797 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:45,799 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740 2023-07-23 05:10:45,803 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740 2023-07-23 05:10:45,807 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 05:10:45,817 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:10:45,821 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11588997440, jitterRate=0.07930949330329895}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:10:45,821 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:10:45,836 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690089045604 2023-07-23 05:10:45,864 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 05:10:45,865 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 05:10:45,865 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37441,1690089043078, state=OPEN 2023-07-23 05:10:45,869 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 05:10:45,869 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:10:45,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 05:10:45,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1690089043078 in 444 msec 2023-07-23 05:10:45,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 
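The AbstractFSWAL configuration entries earlier in this section (blocksize=256 MB, rollsize=128 MB, maxLogs=32, AsyncFSWALProvider, plus a separate .meta WAL for the meta region) come from a handful of WAL settings. A sketch of the keys involved with the values observed in the log; treat it as illustrative rather than the test's own setup code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class WalConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.wal.provider", "asyncfs");                             // AsyncFSWALProvider
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024); // WAL block size
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);          // roll at 0.5 * blocksize = 128 MB
        conf.setInt("hbase.regionserver.maxlogs", 32);                         // maxLogs before forcing flushes
      }
    }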
2023-07-23 05:10:45,881 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 658 msec 2023-07-23 05:10:45,887 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.0370 sec 2023-07-23 05:10:45,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690089045887, completionTime=-1 2023-07-23 05:10:45,887 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 05:10:45,887 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-23 05:10:45,946 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 05:10:45,946 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690089105946 2023-07-23 05:10:45,946 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690089165946 2023-07-23 05:10:45,947 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 59 msec 2023-07-23 05:10:45,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37433,1690089040778-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,981 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37433,1690089040778-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,982 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37433,1690089040778-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:37433, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:45,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:46,000 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 05:10:46,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 05:10:46,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:46,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 05:10:46,035 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:10:46,038 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:10:46,060 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,063 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24 empty. 2023-07-23 05:10:46,064 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,064 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 05:10:46,113 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:46,115 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => a44c79c49f6bdbba941d693414528c24, NAME => 'hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:46,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:46,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a44c79c49f6bdbba941d693414528c24, disabling compactions & flushes 2023-07-23 05:10:46,136 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 
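[Editor's note] The create statement logged above spells out the column family settings used for hbase:namespace. A minimal sketch of building an equivalent descriptor with the public client API follows; the table name "demo_namespace_like" is hypothetical, and this is a user-level illustration, not the CreateTableProcedure path the master itself runs.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateInfoTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Column family mirroring the settings logged for the 'info' family above.
          ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setBloomFilterType(BloomType.ROW)
              .setInMemory(true)
              .setMaxVersions(10)
              .setKeepDeletedCells(KeepDeletedCells.FALSE)
              .setBlocksize(8192)
              .build();
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("demo_namespace_like")) // hypothetical table name
              .setColumnFamily(info)
              .build();
          admin.createTable(td);
        }
      }
    }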
2023-07-23 05:10:46,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:10:46,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. after waiting 0 ms 2023-07-23 05:10:46,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:10:46,136 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:10:46,136 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a44c79c49f6bdbba941d693414528c24: 2023-07-23 05:10:46,140 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:10:46,158 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089046144"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089046144"}]},"ts":"1690089046144"} 2023-07-23 05:10:46,172 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:46,178 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 05:10:46,182 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:10:46,185 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:10:46,189 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,190 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:10:46,191 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 empty. 
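[Editor's note] The hbase:rsgroup descriptor logged above attaches the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. The sketch below reproduces those two attributes on a hypothetical user table ("demo_group_like"); it assumes the TableDescriptorBuilder coprocessor and split-policy setters behave as in the 2.4 client API.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CoprocessorTableSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("demo_group_like"))   // hypothetical table name
              .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                  .setMaxVersions(1)
                  .build())
              // Same endpoint the rsgroup table loads, so multi-row mutations stay atomic.
              .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
              // Same split policy: regions of this table are never split automatically.
              .setRegionSplitPolicyClassName(
                  "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
              .build();
          admin.createTable(td);
        }
      }
    }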
2023-07-23 05:10:46,192 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,192 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 05:10:46,192 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:10:46,197 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089046192"}]},"ts":"1690089046192"} 2023-07-23 05:10:46,201 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 05:10:46,214 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:46,216 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:46,216 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:46,216 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:46,216 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:46,222 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a44c79c49f6bdbba941d693414528c24, ASSIGN}] 2023-07-23 05:10:46,227 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a44c79c49f6bdbba941d693414528c24, ASSIGN 2023-07-23 05:10:46,231 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:46,232 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a44c79c49f6bdbba941d693414528c24, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:46,233 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a6558fab23b07212eec6b6a195311310, NAME => 'hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:46,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:46,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing a6558fab23b07212eec6b6a195311310, disabling compactions & flushes 2023-07-23 05:10:46,269 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:46,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:46,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. after waiting 0 ms 2023-07-23 05:10:46,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:46,269 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:46,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for a6558fab23b07212eec6b6a195311310: 2023-07-23 05:10:46,274 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:10:46,276 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089046276"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089046276"}]},"ts":"1690089046276"} 2023-07-23 05:10:46,280 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 05:10:46,282 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:10:46,283 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089046282"}]},"ts":"1690089046282"} 2023-07-23 05:10:46,285 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 05:10:46,290 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:46,290 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:46,290 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:46,290 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:46,290 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:46,290 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, ASSIGN}] 2023-07-23 05:10:46,293 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, ASSIGN 2023-07-23 05:10:46,295 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:10:46,296 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
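[Editor's note] The balancer output above shows regions being handed to specific servers through TransitRegionStateProcedure. The same machinery can be driven from a client by asking the master to move a region, as in the sketch below; the table name and target server are placeholders supplied on the command line, and this only illustrates the public Admin.move call, not the procedure internals.

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MoveRegionSketch {
      public static void main(String[] args) throws Exception {
        // args[0] = table name, args[1] = target server as host,port,startcode
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          List<RegionInfo> regions = admin.getRegions(TableName.valueOf(args[0]));
          ServerName target = ServerName.valueOf(args[1]);
          // The master executes each move as a TransitRegionStateProcedure, the same
          // mechanism visible in the ASSIGN log lines above.
          admin.move(Bytes.toBytes(regions.get(0).getEncodedName()), target);
        }
      }
    }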
2023-07-23 05:10:46,298 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a44c79c49f6bdbba941d693414528c24, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:46,298 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a6558fab23b07212eec6b6a195311310, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:46,298 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089046297"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089046297"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089046297"}]},"ts":"1690089046297"} 2023-07-23 05:10:46,298 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089046297"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089046297"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089046297"}]},"ts":"1690089046297"} 2023-07-23 05:10:46,303 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=7, state=RUNNABLE; OpenRegionProcedure a6558fab23b07212eec6b6a195311310, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:46,305 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=6, state=RUNNABLE; OpenRegionProcedure a44c79c49f6bdbba941d693414528c24, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:46,458 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:46,458 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:46,462 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:46,462 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:46,462 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41024, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:46,465 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37178, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:46,487 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 
2023-07-23 05:10:46,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a6558fab23b07212eec6b6a195311310, NAME => 'hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:46,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:10:46,487 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. service=MultiRowMutationService 2023-07-23 05:10:46,489 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-23 05:10:46,490 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,490 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:46,490 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,490 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,490 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 
2023-07-23 05:10:46,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a44c79c49f6bdbba941d693414528c24, NAME => 'hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:46,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:46,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,491 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,507 INFO [StoreOpener-a6558fab23b07212eec6b6a195311310-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,519 INFO [StoreOpener-a44c79c49f6bdbba941d693414528c24-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,522 DEBUG [StoreOpener-a44c79c49f6bdbba941d693414528c24-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/info 2023-07-23 05:10:46,522 DEBUG [StoreOpener-a44c79c49f6bdbba941d693414528c24-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/info 2023-07-23 05:10:46,523 INFO [StoreOpener-a44c79c49f6bdbba941d693414528c24-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a44c79c49f6bdbba941d693414528c24 columnFamilyName info 2023-07-23 05:10:46,523 DEBUG [StoreOpener-a6558fab23b07212eec6b6a195311310-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m 2023-07-23 05:10:46,523 DEBUG 
[StoreOpener-a6558fab23b07212eec6b6a195311310-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m 2023-07-23 05:10:46,524 INFO [StoreOpener-a6558fab23b07212eec6b6a195311310-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a6558fab23b07212eec6b6a195311310 columnFamilyName m 2023-07-23 05:10:46,524 INFO [StoreOpener-a44c79c49f6bdbba941d693414528c24-1] regionserver.HStore(310): Store=a44c79c49f6bdbba941d693414528c24/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:46,529 INFO [StoreOpener-a6558fab23b07212eec6b6a195311310-1] regionserver.HStore(310): Store=a6558fab23b07212eec6b6a195311310/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:46,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,532 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,534 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,543 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:46,544 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a44c79c49f6bdbba941d693414528c24 2023-07-23 05:10:46,552 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:46,552 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:46,553 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a6558fab23b07212eec6b6a195311310; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@17929218, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:46,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a6558fab23b07212eec6b6a195311310: 2023-07-23 05:10:46,553 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a44c79c49f6bdbba941d693414528c24; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11776597920, jitterRate=0.09678114950656891}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:46,553 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a44c79c49f6bdbba941d693414528c24: 2023-07-23 05:10:46,557 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24., pid=9, masterSystemTime=1690089046462 2023-07-23 05:10:46,560 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310., pid=8, masterSystemTime=1690089046457 2023-07-23 05:10:46,566 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:10:46,567 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:10:46,568 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a44c79c49f6bdbba941d693414528c24, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:46,568 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089046567"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089046567"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089046567"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089046567"}]},"ts":"1690089046567"} 2023-07-23 05:10:46,571 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:46,572 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 
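[Editor's note] The open messages above report FlushLargeStoresPolicy{flushSizeLowerBound=-1}, and an earlier line notes that hbase.hregion.percolumnfamilyflush.size.lower.bound was not set in the table descriptor. A sketch of setting that per-table value through the descriptor follows; the table name and the 16 MB bound are arbitrary examples, and the property key is taken verbatim from the log line.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class FlushPolicySketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor td = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("demo_flush_tuned"))  // hypothetical table name
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("cf")))
              // Per-table lower bound read by FlushLargeStoresPolicy; 16 MB is an
              // arbitrary example value for illustration.
              .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                  String.valueOf(16L * 1024 * 1024))
              .build();
          admin.createTable(td);
        }
      }
    }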
2023-07-23 05:10:46,578 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=a6558fab23b07212eec6b6a195311310, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:46,579 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089046578"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089046578"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089046578"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089046578"}]},"ts":"1690089046578"} 2023-07-23 05:10:46,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=6 2023-07-23 05:10:46,585 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=6, state=SUCCESS; OpenRegionProcedure a44c79c49f6bdbba941d693414528c24, server=jenkins-hbase4.apache.org,46173,1690089043304 in 267 msec 2023-07-23 05:10:46,598 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=7 2023-07-23 05:10:46,599 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=7, state=SUCCESS; OpenRegionProcedure a6558fab23b07212eec6b6a195311310, server=jenkins-hbase4.apache.org,45681,1690089042835 in 281 msec 2023-07-23 05:10:46,599 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-23 05:10:46,600 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a44c79c49f6bdbba941d693414528c24, ASSIGN in 363 msec 2023-07-23 05:10:46,601 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:10:46,601 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089046601"}]},"ts":"1690089046601"} 2023-07-23 05:10:46,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-23 05:10:46,605 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, ASSIGN in 309 msec 2023-07-23 05:10:46,605 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 05:10:46,606 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:10:46,606 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089046606"}]},"ts":"1690089046606"} 2023-07-23 05:10:46,611 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 05:10:46,613 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:10:46,615 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:10:46,619 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 595 msec 2023-07-23 05:10:46,620 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 443 msec 2023-07-23 05:10:46,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 05:10:46,637 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:10:46,637 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:46,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:46,673 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37194, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:46,690 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:46,695 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 05:10:46,714 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41032, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:46,724 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 05:10:46,724 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-23 05:10:46,736 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:10:46,743 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 61 msec 2023-07-23 05:10:46,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 05:10:46,772 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:10:46,778 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 25 msec 2023-07-23 05:10:46,787 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 05:10:46,791 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 05:10:46,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.233sec 2023-07-23 05:10:46,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 05:10:46,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-23 05:10:46,797 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 05:10:46,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37433,1690089040778-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 05:10:46,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37433,1690089040778-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
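[Editor's note] The 'default' and 'hbase' namespaces above are created by the master via CreateNamespaceProcedure. The equivalent client-side call for a user namespace is shown below; the namespace name "demo_ns" is hypothetical.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class CreateNamespaceSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Runs as a CreateNamespaceProcedure on the master, like the ones logged
          // above for 'default' and 'hbase'.
          admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
          for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
            System.out.println(ns.getName());
          }
        }
      }
    }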
2023-07-23 05:10:46,801 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:46,801 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:46,803 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:10:46,811 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 05:10:46,814 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 05:10:46,830 DEBUG [Listener at localhost/44477] zookeeper.ReadOnlyZKClient(139): Connect 0x518a774a to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:10:46,836 DEBUG [Listener at localhost/44477] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3041a83e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:46,856 DEBUG [hconnection-0x2a5e2fc3-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:46,875 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44304, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:46,887 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:10:46,889 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:46,900 DEBUG [Listener at localhost/44477] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 05:10:46,906 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39966, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 05:10:46,922 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 05:10:46,922 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:10:46,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 05:10:46,929 DEBUG [Listener at localhost/44477] zookeeper.ReadOnlyZKClient(139): Connect 0x09752cf4 to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry 
interval 1000ms, keepAlive=60000ms 2023-07-23 05:10:46,934 DEBUG [Listener at localhost/44477] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b21cb93, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:46,935 INFO [Listener at localhost/44477] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:63392 2023-07-23 05:10:46,937 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:10:46,939 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019096aaec000a connected 2023-07-23 05:10:46,969 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=684, MaxFileDescriptor=60000, SystemLoadAverage=475, ProcessCount=177, AvailableMemoryMB=6965 2023-07-23 05:10:46,972 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-23 05:10:46,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:47,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:47,044 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 05:10:47,062 INFO [Listener at localhost/44477] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:10:47,063 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:10:47,063 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:10:47,063 INFO [Listener at localhost/44477] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:10:47,063 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:10:47,063 INFO [Listener at localhost/44477] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:10:47,063 INFO [Listener at localhost/44477] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:10:47,067 INFO [Listener at localhost/44477] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41981 2023-07-23 05:10:47,068 INFO [Listener at localhost/44477] hfile.BlockCacheFactory(142): Allocating BlockCache 
size=782.40 MB, blockSize=64 KB 2023-07-23 05:10:47,069 DEBUG [Listener at localhost/44477] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:10:47,071 INFO [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:10:47,075 INFO [Listener at localhost/44477] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:10:47,078 INFO [Listener at localhost/44477] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41981 connecting to ZooKeeper ensemble=127.0.0.1:63392 2023-07-23 05:10:47,083 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:419810x0, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:10:47,086 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(162): regionserver:419810x0, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:10:47,087 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41981-0x1019096aaec000b connected 2023-07-23 05:10:47,088 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(162): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 05:10:47,089 DEBUG [Listener at localhost/44477] zookeeper.ZKUtil(164): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:10:47,091 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41981 2023-07-23 05:10:47,094 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41981 2023-07-23 05:10:47,095 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41981 2023-07-23 05:10:47,098 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41981 2023-07-23 05:10:47,099 DEBUG [Listener at localhost/44477] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41981 2023-07-23 05:10:47,101 INFO [Listener at localhost/44477] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:10:47,101 INFO [Listener at localhost/44477] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:10:47,101 INFO [Listener at localhost/44477] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:10:47,102 INFO [Listener at localhost/44477] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:10:47,102 INFO [Listener at localhost/44477] http.HttpServer(886): Added filter 
static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:10:47,102 INFO [Listener at localhost/44477] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:10:47,102 INFO [Listener at localhost/44477] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 05:10:47,103 INFO [Listener at localhost/44477] http.HttpServer(1146): Jetty bound to port 38153 2023-07-23 05:10:47,103 INFO [Listener at localhost/44477] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:10:47,111 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:47,111 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1439103a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:10:47,112 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:47,112 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1516c024{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:10:47,246 INFO [Listener at localhost/44477] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:10:47,248 INFO [Listener at localhost/44477] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:10:47,248 INFO [Listener at localhost/44477] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:10:47,248 INFO [Listener at localhost/44477] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:10:47,250 INFO [Listener at localhost/44477] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:10:47,251 INFO [Listener at localhost/44477] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5bcd3e79{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/java.io.tmpdir/jetty-0_0_0_0-38153-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5855085460491218278/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:10:47,253 INFO [Listener at localhost/44477] server.AbstractConnector(333): Started ServerConnector@5a5d92bc{HTTP/1.1, (http/1.1)}{0.0.0.0:38153} 2023-07-23 05:10:47,253 INFO [Listener at localhost/44477] server.Server(415): Started @12093ms 2023-07-23 05:10:47,263 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(951): ClusterId : 91356a8d-4149-4a07-ac08-30b1745b5070 2023-07-23 05:10:47,268 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 
05:10:47,271 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:10:47,271 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:10:47,274 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:10:47,276 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ReadOnlyZKClient(139): Connect 0x5a8d6a49 to 127.0.0.1:63392 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:10:47,287 DEBUG [RS:3;jenkins-hbase4:41981] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@544a9f68, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:10:47,287 DEBUG [RS:3;jenkins-hbase4:41981] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@50750139, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:10:47,298 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:41981 2023-07-23 05:10:47,298 INFO [RS:3;jenkins-hbase4:41981] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:10:47,298 INFO [RS:3;jenkins-hbase4:41981] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:10:47,298 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:10:47,299 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,37433,1690089040778 with isa=jenkins-hbase4.apache.org/172.31.14.131:41981, startcode=1690089047062 2023-07-23 05:10:47,299 DEBUG [RS:3;jenkins-hbase4:41981] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:10:47,305 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41187, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:10:47,306 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37433] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,306 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
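[Editor's note] The ListRSGroupInfos request logged earlier and the "Updating default servers" lines above come from the rsgroup coprocessor endpoint on the master. A minimal sketch of issuing that listing from a client follows, assuming the RSGroupAdminClient class from the hbase-rsgroup module and its listRSGroups method are available as in branch-2.4; treat the exact constructor and method names as assumptions drawn from that module.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListGroupsSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          // Issues the same RSGroupAdminService.ListRSGroupInfos call seen in the log.
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
          for (RSGroupInfo group : groupAdmin.listRSGroups()) {
            System.out.println(group.getName() + " -> " + group.getServers());
          }
        }
      }
    }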
2023-07-23 05:10:47,312 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:47,313 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:10:47,318 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04 2023-07-23 05:10:47,318 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36893 2023-07-23 05:10:47,318 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37311 2023-07-23 05:10:47,322 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,37433,1690089040778] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 05:10:47,323 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:10:47,323 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:10:47,323 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:10:47,323 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:10:47,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:47,325 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ZKUtil(162): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,325 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41981,1690089047062] 2023-07-23 05:10:47,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:47,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:47,325 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, 
quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:47,326 WARN [RS:3;jenkins-hbase4:41981] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:10:47,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,326 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:47,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:47,327 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:47,327 INFO [RS:3;jenkins-hbase4:41981] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:47,327 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,328 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:47,328 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:47,339 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ZKUtil(162): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:47,339 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ZKUtil(162): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,340 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ZKUtil(162): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:47,341 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ZKUtil(162): regionserver:41981-0x1019096aaec000b, 
quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:47,342 DEBUG [RS:3;jenkins-hbase4:41981] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:10:47,342 INFO [RS:3;jenkins-hbase4:41981] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:10:47,350 INFO [RS:3;jenkins-hbase4:41981] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:10:47,350 INFO [RS:3;jenkins-hbase4:41981] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:10:47,351 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:47,351 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:10:47,354 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:10:47,354 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,355 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,355 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,355 DEBUG [RS:3;jenkins-hbase4:41981] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:10:47,365 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-23 05:10:47,365 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:47,365 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:47,378 INFO [RS:3;jenkins-hbase4:41981] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:10:47,378 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41981,1690089047062-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:10:47,389 INFO [RS:3;jenkins-hbase4:41981] regionserver.Replication(203): jenkins-hbase4.apache.org,41981,1690089047062 started 2023-07-23 05:10:47,390 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41981,1690089047062, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41981, sessionid=0x1019096aaec000b 2023-07-23 05:10:47,390 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:10:47,390 DEBUG [RS:3;jenkins-hbase4:41981] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,390 DEBUG [RS:3;jenkins-hbase4:41981] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41981,1690089047062' 2023-07-23 05:10:47,390 DEBUG [RS:3;jenkins-hbase4:41981] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:10:47,391 DEBUG [RS:3;jenkins-hbase4:41981] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:10:47,391 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:10:47,391 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:10:47,391 DEBUG [RS:3;jenkins-hbase4:41981] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:47,391 DEBUG [RS:3;jenkins-hbase4:41981] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41981,1690089047062' 2023-07-23 05:10:47,391 DEBUG [RS:3;jenkins-hbase4:41981] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:10:47,393 DEBUG [RS:3;jenkins-hbase4:41981] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:10:47,394 DEBUG [RS:3;jenkins-hbase4:41981] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:10:47,394 INFO [RS:3;jenkins-hbase4:41981] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:10:47,394 INFO [RS:3;jenkins-hbase4:41981] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 05:10:47,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:47,401 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:47,402 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:47,404 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:47,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:47,412 DEBUG [hconnection-0x2db71259-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:47,416 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44320, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:47,425 DEBUG [hconnection-0x2db71259-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:47,428 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41044, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:47,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:47,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:47,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:47,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:47,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:39966 deadline: 1690090247441, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:10:47,443 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:10:47,445 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:47,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:47,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:47,448 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:10:47,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:47,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:47,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:47,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:47,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:47,463 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:47,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:47,471 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:47,471 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:47,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:47,480 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:47,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:47,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:47,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:47,489 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:47,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:47,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:47,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:47,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:47,498 INFO [RS:3;jenkins-hbase4:41981] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41981%2C1690089047062, suffix=, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,41981,1690089047062, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs, maxLogs=32 2023-07-23 05:10:47,501 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-23 05:10:47,501 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-23 05:10:47,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 05:10:47,503 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37441,1690089043078, state=CLOSING 2023-07-23 05:10:47,505 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 05:10:47,505 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:10:47,505 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:47,548 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:47,548 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:47,553 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:47,557 INFO [RS:3;jenkins-hbase4:41981] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,41981,1690089047062/jenkins-hbase4.apache.org%2C41981%2C1690089047062.1690089047500 2023-07-23 05:10:47,562 DEBUG [RS:3;jenkins-hbase4:41981] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK], DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK]] 2023-07-23 05:10:47,669 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-23 05:10:47,670 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:10:47,670 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:10:47,670 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:10:47,671 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:10:47,671 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:10:47,672 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.49 KB heapSize=5 KB 2023-07-23 05:10:47,796 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.31 KB at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/info/b079d74b4e6e4495b7bd7cab009d36ff 2023-07-23 05:10:47,887 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=14 (bloomFilter=false), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/table/f9579ee4ed104992a28615a3d0a38395 2023-07-23 05:10:47,900 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/info/b079d74b4e6e4495b7bd7cab009d36ff as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info/b079d74b4e6e4495b7bd7cab009d36ff 2023-07-23 05:10:47,917 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info/b079d74b4e6e4495b7bd7cab009d36ff, entries=20, sequenceid=14, filesize=7.0 K 2023-07-23 05:10:47,922 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/table/f9579ee4ed104992a28615a3d0a38395 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table/f9579ee4ed104992a28615a3d0a38395 2023-07-23 05:10:47,934 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table/f9579ee4ed104992a28615a3d0a38395, entries=4, sequenceid=14, filesize=4.8 K 2023-07-23 05:10:47,941 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.49 KB/2550, heapSize ~4.72 KB/4832, currentSize=0 B/0 for 1588230740 in 269ms, sequenceid=14, compaction requested=false 2023-07-23 05:10:47,943 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 05:10:47,962 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/recovered.edits/17.seqid, newMaxSeqId=17, maxSeqId=1 2023-07-23 05:10:47,963 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:10:47,964 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:10:47,964 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:10:47,964 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,46173,1690089043304 record at close sequenceid=14 2023-07-23 05:10:47,967 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-23 05:10:47,968 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-23 05:10:47,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 05:10:47,974 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37441,1690089043078 in 463 msec 2023-07-23 05:10:47,975 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:48,125 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 05:10:48,126 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46173,1690089043304, state=OPENING 2023-07-23 05:10:48,133 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 05:10:48,133 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:10:48,133 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:48,291 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 05:10:48,291 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:10:48,294 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46173%2C1690089043304.meta, suffix=.meta, logDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,46173,1690089043304, archiveDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs, maxLogs=32 2023-07-23 05:10:48,315 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured 
configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK] 2023-07-23 05:10:48,316 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK] 2023-07-23 05:10:48,320 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK] 2023-07-23 05:10:48,325 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,46173,1690089043304/jenkins-hbase4.apache.org%2C46173%2C1690089043304.meta.1690089048295.meta 2023-07-23 05:10:48,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37209,DS-cdb3efdb-89af-4924-a22c-6cfcc4bbae9b,DISK], DatanodeInfoWithStorage[127.0.0.1:39809,DS-bf1a56e0-d033-40d3-9e0b-ba7765a24a06,DISK], DatanodeInfoWithStorage[127.0.0.1:44583,DS-5b9ad8eb-3fe5-4ac8-8b1f-308316eed459,DISK]] 2023-07-23 05:10:48,325 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:48,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:10:48,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 05:10:48,326 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-23 05:10:48,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 05:10:48,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:48,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 05:10:48,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 05:10:48,331 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:10:48,332 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info 2023-07-23 05:10:48,332 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info 2023-07-23 05:10:48,333 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:10:48,348 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info/b079d74b4e6e4495b7bd7cab009d36ff 2023-07-23 05:10:48,349 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:48,349 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 05:10:48,351 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:10:48,351 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier 2023-07-23 
05:10:48,352 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:10:48,353 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:48,353 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:10:48,354 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table 2023-07-23 05:10:48,354 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table 2023-07-23 05:10:48,354 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:10:48,371 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table/f9579ee4ed104992a28615a3d0a38395 2023-07-23 05:10:48,372 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:48,373 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740 2023-07-23 05:10:48,375 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740 2023-07-23 05:10:48,379 DEBUG 
[RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 05:10:48,382 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:10:48,383 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=18; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9444438400, jitterRate=-0.12041813135147095}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:10:48,383 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:10:48,385 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=14, masterSystemTime=1690089048286 2023-07-23 05:10:48,387 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 05:10:48,387 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 05:10:48,388 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,46173,1690089043304, state=OPEN 2023-07-23 05:10:48,390 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 05:10:48,390 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:10:48,393 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-23 05:10:48,393 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,46173,1690089043304 in 257 msec 2023-07-23 05:10:48,395 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 894 msec 2023-07-23 05:10:48,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-23 05:10:48,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to default 2023-07-23 05:10:48,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:48,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:48,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:48,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:48,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:48,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:48,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:48,528 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:48,531 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:10:48,536 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 15 2023-07-23 05:10:48,537 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:48,537 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:48,538 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:48,538 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:48,546 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:10:48,547 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:10:48,549 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37441] ipc.CallRunner(144): callId: 40 service: ClientService methodName: Get size: 151 connection: 172.31.14.131:44294 deadline: 1690089108548, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46173 startCode=1690089043304. As of locationSeqNum=14. 
2023-07-23 05:10:48,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:10:48,659 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:48,660 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:48,660 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:48,660 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 empty. 2023-07-23 05:10:48,660 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:48,661 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:48,661 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 empty. 2023-07-23 05:10:48,661 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 empty. 2023-07-23 05:10:48,661 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:48,666 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 empty. 2023-07-23 05:10:48,667 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:48,667 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:48,667 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 empty. 
2023-07-23 05:10:48,667 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:48,667 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:48,668 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 05:10:48,695 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:48,696 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5c5ed9bf96bfd001c34b57b1293ac322, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:48,697 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 6955eb3f7d33cecac81a52c3cad1f458, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:48,697 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => a30738775011c63d57d74eb291843655, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:48,765 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.; 
StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:48,766 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5c5ed9bf96bfd001c34b57b1293ac322, disabling compactions & flushes 2023-07-23 05:10:48,766 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:48,766 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:48,766 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. after waiting 0 ms 2023-07-23 05:10:48,766 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:48,766 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:48,766 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5c5ed9bf96bfd001c34b57b1293ac322: 2023-07-23 05:10:48,767 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => eee48e49fb983f0b754fb402c78f98d1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:48,769 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:48,769 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:48,770 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing a30738775011c63d57d74eb291843655, disabling compactions & flushes 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 6955eb3f7d33cecac81a52c3cad1f458, disabling compactions & flushes 
2023-07-23 05:10:48,771 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:48,771 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. after waiting 0 ms 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. after waiting 0 ms 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:48,771 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:48,771 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 
2023-07-23 05:10:48,771 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for a30738775011c63d57d74eb291843655: 2023-07-23 05:10:48,772 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 6955eb3f7d33cecac81a52c3cad1f458: 2023-07-23 05:10:48,772 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => de03e3a438580d72768668169e6084d9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:48,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:48,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing eee48e49fb983f0b754fb402c78f98d1, disabling compactions & flushes 2023-07-23 05:10:48,795 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:48,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:48,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. after waiting 0 ms 2023-07-23 05:10:48,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:48,795 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 
2023-07-23 05:10:48,795 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for eee48e49fb983f0b754fb402c78f98d1: 2023-07-23 05:10:48,798 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:48,799 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing de03e3a438580d72768668169e6084d9, disabling compactions & flushes 2023-07-23 05:10:48,799 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:48,799 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:48,799 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. after waiting 0 ms 2023-07-23 05:10:48,799 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:48,799 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:48,799 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for de03e3a438580d72768668169e6084d9: 2023-07-23 05:10:48,806 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:10:48,807 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089048807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089048807"}]},"ts":"1690089048807"} 2023-07-23 05:10:48,807 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089048807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089048807"}]},"ts":"1690089048807"} 2023-07-23 05:10:48,807 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089048807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089048807"}]},"ts":"1690089048807"} 2023-07-23 05:10:48,807 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089048807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089048807"}]},"ts":"1690089048807"} 2023-07-23 05:10:48,808 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089048807"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089048807"}]},"ts":"1690089048807"} 2023-07-23 05:10:48,852 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-23 05:10:48,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:10:48,854 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:10:48,854 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089048854"}]},"ts":"1690089048854"} 2023-07-23 05:10:48,856 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-23 05:10:48,862 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:48,862 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:48,862 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:48,862 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:48,862 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, ASSIGN}, {pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, ASSIGN}, {pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, ASSIGN}, {pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, ASSIGN}, {pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, ASSIGN}] 2023-07-23 05:10:48,865 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, ASSIGN 2023-07-23 05:10:48,865 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, ASSIGN 2023-07-23 05:10:48,866 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, ASSIGN 2023-07-23 05:10:48,866 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=eee48e49fb983f0b754fb402c78f98d1, ASSIGN 2023-07-23 05:10:48,868 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=16, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:48,868 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, ASSIGN 2023-07-23 05:10:48,868 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:48,868 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:10:48,868 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:48,870 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=18, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:10:49,018 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-23 05:10:49,022 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,022 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:49,022 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,023 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089049022"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049022"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049022"}]},"ts":"1690089049022"} 2023-07-23 05:10:49,022 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:49,023 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049022"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049022"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049022"}]},"ts":"1690089049022"} 2023-07-23 05:10:49,022 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,023 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049022"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049022"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049022"}]},"ts":"1690089049022"} 2023-07-23 05:10:49,023 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049022"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049022"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049022"}]},"ts":"1690089049022"} 2023-07-23 05:10:49,023 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089049022"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049022"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049022"}]},"ts":"1690089049022"} 2023-07-23 05:10:49,026 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=16, state=RUNNABLE; OpenRegionProcedure 
5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:49,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=17, state=RUNNABLE; OpenRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:49,032 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=23, ppid=19, state=RUNNABLE; OpenRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:49,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=18, state=RUNNABLE; OpenRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:49,035 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=20, state=RUNNABLE; OpenRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:49,154 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:10:49,190 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:49,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6955eb3f7d33cecac81a52c3cad1f458, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 05:10:49,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:49,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 
2023-07-23 05:10:49,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eee48e49fb983f0b754fb402c78f98d1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 05:10:49,193 INFO [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:49,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,195 INFO [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,195 DEBUG [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/f 2023-07-23 05:10:49,195 DEBUG [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/f 2023-07-23 05:10:49,196 INFO [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6955eb3f7d33cecac81a52c3cad1f458 columnFamilyName f 2023-07-23 05:10:49,196 INFO [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] regionserver.HStore(310): Store=6955eb3f7d33cecac81a52c3cad1f458/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:49,197 DEBUG [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/f 2023-07-23 05:10:49,198 DEBUG [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/f 2023-07-23 05:10:49,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,199 INFO [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eee48e49fb983f0b754fb402c78f98d1 columnFamilyName f 2023-07-23 05:10:49,200 INFO [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] regionserver.HStore(310): Store=eee48e49fb983f0b754fb402c78f98d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:49,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,205 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:49,206 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:49,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:49,209 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6955eb3f7d33cecac81a52c3cad1f458; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11249433920, jitterRate=0.0476851761341095}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:49,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6955eb3f7d33cecac81a52c3cad1f458: 2023-07-23 05:10:49,209 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:49,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458., pid=22, masterSystemTime=1690089049185 2023-07-23 05:10:49,210 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eee48e49fb983f0b754fb402c78f98d1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10307226080, jitterRate=-0.04006476700305939}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:49,210 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eee48e49fb983f0b754fb402c78f98d1: 2023-07-23 05:10:49,212 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1., pid=23, masterSystemTime=1690089049188 2023-07-23 05:10:49,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:49,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:49,214 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:49,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de03e3a438580d72768668169e6084d9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 05:10:49,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,214 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:49,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,215 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,215 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,215 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049215"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089049215"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089049215"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089049215"}]},"ts":"1690089049215"} 2023-07-23 05:10:49,216 INFO [StoreOpener-de03e3a438580d72768668169e6084d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,218 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:49,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:49,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 
2023-07-23 05:10:49,219 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049217"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089049217"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089049217"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089049217"}]},"ts":"1690089049217"} 2023-07-23 05:10:49,219 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:49,219 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a30738775011c63d57d74eb291843655, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 05:10:49,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:49,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,223 INFO [StoreOpener-a30738775011c63d57d74eb291843655-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,225 DEBUG [StoreOpener-de03e3a438580d72768668169e6084d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/f 2023-07-23 05:10:49,225 DEBUG [StoreOpener-de03e3a438580d72768668169e6084d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/f 2023-07-23 05:10:49,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=17 2023-07-23 05:10:49,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=17, state=SUCCESS; OpenRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,46173,1690089043304 in 188 msec 2023-07-23 05:10:49,226 INFO [StoreOpener-de03e3a438580d72768668169e6084d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de03e3a438580d72768668169e6084d9 columnFamilyName f 2023-07-23 05:10:49,227 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=23, resume processing ppid=19 2023-07-23 05:10:49,228 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=19, state=SUCCESS; OpenRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,45681,1690089042835 in 190 msec 2023-07-23 05:10:49,228 INFO [StoreOpener-de03e3a438580d72768668169e6084d9-1] regionserver.HStore(310): Store=de03e3a438580d72768668169e6084d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:49,229 DEBUG [StoreOpener-a30738775011c63d57d74eb291843655-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/f 2023-07-23 05:10:49,233 DEBUG [StoreOpener-a30738775011c63d57d74eb291843655-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/f 2023-07-23 05:10:49,234 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, ASSIGN in 364 msec 2023-07-23 05:10:49,234 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, ASSIGN in 366 msec 2023-07-23 05:10:49,235 INFO [StoreOpener-a30738775011c63d57d74eb291843655-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a30738775011c63d57d74eb291843655 columnFamilyName f 2023-07-23 05:10:49,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,236 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 
recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,237 INFO [StoreOpener-a30738775011c63d57d74eb291843655-1] regionserver.HStore(310): Store=a30738775011c63d57d74eb291843655/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:49,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,242 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de03e3a438580d72768668169e6084d9 2023-07-23 05:10:49,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a30738775011c63d57d74eb291843655 2023-07-23 05:10:49,246 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:49,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de03e3a438580d72768668169e6084d9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11635046560, jitterRate=0.08359815180301666}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:49,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de03e3a438580d72768668169e6084d9: 2023-07-23 05:10:49,248 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9., pid=25, masterSystemTime=1690089049185 2023-07-23 05:10:49,248 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:49,249 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a30738775011c63d57d74eb291843655; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11963807680, jitterRate=0.11421641707420349}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:49,249 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a30738775011c63d57d74eb291843655: 2023-07-23 05:10:49,250 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655., pid=24, masterSystemTime=1690089049188 2023-07-23 05:10:49,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:49,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:49,251 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:49,251 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c5ed9bf96bfd001c34b57b1293ac322, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 05:10:49,252 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,252 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089049251"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089049251"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089049251"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089049251"}]},"ts":"1690089049251"} 2023-07-23 05:10:49,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:49,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,252 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,253 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:49,255 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 
2023-07-23 05:10:49,257 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=18 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:49,257 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049256"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089049256"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089049256"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089049256"}]},"ts":"1690089049256"} 2023-07-23 05:10:49,257 INFO [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,261 DEBUG [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/f 2023-07-23 05:10:49,261 DEBUG [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/f 2023-07-23 05:10:49,262 INFO [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c5ed9bf96bfd001c34b57b1293ac322 columnFamilyName f 2023-07-23 05:10:49,263 INFO [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] regionserver.HStore(310): Store=5c5ed9bf96bfd001c34b57b1293ac322/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:49,264 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,266 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, 
resume processing ppid=20 2023-07-23 05:10:49,267 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=20, state=SUCCESS; OpenRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,46173,1690089043304 in 222 msec 2023-07-23 05:10:49,269 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=18 2023-07-23 05:10:49,269 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, ASSIGN in 405 msec 2023-07-23 05:10:49,270 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=18, state=SUCCESS; OpenRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,45681,1690089042835 in 228 msec 2023-07-23 05:10:49,272 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:49,272 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, ASSIGN in 408 msec 2023-07-23 05:10:49,275 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:49,276 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c5ed9bf96bfd001c34b57b1293ac322; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11715762080, jitterRate=0.09111537039279938}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:49,276 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c5ed9bf96bfd001c34b57b1293ac322: 2023-07-23 05:10:49,277 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322., pid=21, masterSystemTime=1690089049185 2023-07-23 05:10:49,280 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:49,280 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 
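(Aside, not part of the captured output: the regions being opened above belong to the pre-split table created by pid=15, with boundaries 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz' visible in the region names. Below is a minimal sketch of creating such a pre-split table with the stock HBase 2.x Admin API; the table name, column family 'f' and split points are taken from the log, while the class name and connection setup are purely illustrative and not how the test itself necessarily does it.)

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Split points taken from the region boundaries in the log above;
      // Bytes.toBytesBinary understands the \xNN notation the log uses.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
          Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
          Bytes.toBytes("zzzzz")
      };
      // Creates the 5 regions whose ASSIGN procedures appear in the log.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          splits);
    }
  }
}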
2023-07-23 05:10:49,280 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=16 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,281 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089049280"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089049280"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089049280"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089049280"}]},"ts":"1690089049280"} 2023-07-23 05:10:49,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=16 2023-07-23 05:10:49,287 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=16, state=SUCCESS; OpenRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,46173,1690089043304 in 258 msec 2023-07-23 05:10:49,295 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=15 2023-07-23 05:10:49,296 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, ASSIGN in 425 msec 2023-07-23 05:10:49,297 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:10:49,297 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089049297"}]},"ts":"1690089049297"} 2023-07-23 05:10:49,300 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-23 05:10:49,303 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:10:49,305 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 776 msec 2023-07-23 05:10:49,656 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:10:49,657 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 15 completed 2023-07-23 05:10:49,657 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-23 05:10:49,659 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:49,664 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37441] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:44304 deadline: 1690089109664, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46173 startCode=1690089043304. As of locationSeqNum=14. 2023-07-23 05:10:49,768 DEBUG [hconnection-0x2a5e2fc3-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:49,771 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37206, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:49,792 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-23 05:10:49,793 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:49,793 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-23 05:10:49,793 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:49,799 DEBUG [Listener at localhost/44477] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:49,815 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:49,818 DEBUG [Listener at localhost/44477] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:49,823 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32772, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:49,824 DEBUG [Listener at localhost/44477] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:49,831 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:49,833 DEBUG [Listener at localhost/44477] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:49,836 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37220, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:49,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:49,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:10:49,851 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,859 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:49,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:49,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:49,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region 5c5ed9bf96bfd001c34b57b1293ac322 to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,869 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:49,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:49,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:49,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:49,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:49,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, REOPEN/MOVE 2023-07-23 05:10:49,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region 6955eb3f7d33cecac81a52c3cad1f458 to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,872 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, REOPEN/MOVE 2023-07-23 05:10:49,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:49,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:49,873 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:49,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:49,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:49,875 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, REOPEN/MOVE 2023-07-23 05:10:49,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region a30738775011c63d57d74eb291843655 to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,876 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, REOPEN/MOVE 2023-07-23 05:10:49,876 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089049875"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049875"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049875"}]},"ts":"1690089049875"} 2023-07-23 05:10:49,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:49,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:49,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:49,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:49,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:49,878 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,878 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049878"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049878"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049878"}]},"ts":"1690089049878"} 2023-07-23 05:10:49,879 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] 
procedure2.ProcedureExecutor(1029): Stored pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, REOPEN/MOVE 2023-07-23 05:10:49,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region eee48e49fb983f0b754fb402c78f98d1 to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,880 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, REOPEN/MOVE 2023-07-23 05:10:49,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:49,880 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=29, ppid=26, state=RUNNABLE; CloseRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:49,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:49,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:49,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:49,880 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:49,881 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:49,882 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=31, ppid=27, state=RUNNABLE; CloseRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:49,882 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049881"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049881"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049881"}]},"ts":"1690089049881"} 2023-07-23 05:10:49,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, REOPEN/MOVE 2023-07-23 05:10:49,886 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=28, state=RUNNABLE; CloseRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:49,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region de03e3a438580d72768668169e6084d9 to RSGroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:49,889 INFO 
[PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, REOPEN/MOVE 2023-07-23 05:10:49,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:49,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:49,891 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:49,891 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089049891"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049891"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049891"}]},"ts":"1690089049891"} 2023-07-23 05:10:49,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, REOPEN/MOVE 2023-07-23 05:10:49,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_595606539, current retry=0 2023-07-23 05:10:49,893 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, REOPEN/MOVE 2023-07-23 05:10:49,896 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:49,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:49,896 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089049896"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089049896"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089049896"}]},"ts":"1690089049896"} 2023-07-23 05:10:49,899 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=33, state=RUNNABLE; CloseRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:50,036 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c5ed9bf96bfd001c34b57b1293ac322, disabling compactions & flushes 2023-07-23 05:10:50,038 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:50,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:50,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. after waiting 0 ms 2023-07-23 05:10:50,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:50,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eee48e49fb983f0b754fb402c78f98d1, disabling compactions & flushes 2023-07-23 05:10:50,054 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:50,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:50,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. after waiting 0 ms 2023-07-23 05:10:50,054 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:50,059 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:50,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 
2023-07-23 05:10:50,061 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c5ed9bf96bfd001c34b57b1293ac322: 2023-07-23 05:10:50,061 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5c5ed9bf96bfd001c34b57b1293ac322 move to jenkins-hbase4.apache.org,41981,1690089047062 record at close sequenceid=2 2023-07-23 05:10:50,067 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:50,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,068 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:50,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de03e3a438580d72768668169e6084d9, disabling compactions & flushes 2023-07-23 05:10:50,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eee48e49fb983f0b754fb402c78f98d1: 2023-07-23 05:10:50,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:50,070 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding eee48e49fb983f0b754fb402c78f98d1 move to jenkins-hbase4.apache.org,37441,1690089043078 record at close sequenceid=2 2023-07-23 05:10:50,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:50,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. after waiting 0 ms 2023-07-23 05:10:50,070 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:50,070 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=CLOSED 2023-07-23 05:10:50,070 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050070"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089050070"}]},"ts":"1690089050070"} 2023-07-23 05:10:50,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,076 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,077 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=CLOSED 2023-07-23 05:10:50,077 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050077"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089050077"}]},"ts":"1690089050077"} 2023-07-23 05:10:50,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a30738775011c63d57d74eb291843655, disabling compactions & flushes 2023-07-23 05:10:50,079 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:50,079 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:50,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. after waiting 0 ms 2023-07-23 05:10:50,080 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:50,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=29, resume processing ppid=26 2023-07-23 05:10:50,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=29, ppid=26, state=SUCCESS; CloseRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,46173,1690089043304 in 193 msec 2023-07-23 05:10:50,081 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:50,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:50,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de03e3a438580d72768668169e6084d9: 2023-07-23 05:10:50,082 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding de03e3a438580d72768668169e6084d9 move to jenkins-hbase4.apache.org,41981,1690089047062 record at close sequenceid=2 2023-07-23 05:10:50,082 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=26, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41981,1690089047062; forceNewPlan=false, retain=false 2023-07-23 05:10:50,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,086 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6955eb3f7d33cecac81a52c3cad1f458, disabling compactions & flushes 2023-07-23 05:10:50,086 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:50,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:50,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. after waiting 0 ms 2023-07-23 05:10:50,087 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 
2023-07-23 05:10:50,087 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-23 05:10:50,087 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=CLOSED 2023-07-23 05:10:50,088 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,45681,1690089042835 in 186 msec 2023-07-23 05:10:50,088 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050087"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089050087"}]},"ts":"1690089050087"} 2023-07-23 05:10:50,089 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:50,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:50,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 
2023-07-23 05:10:50,091 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a30738775011c63d57d74eb291843655: 2023-07-23 05:10:50,091 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a30738775011c63d57d74eb291843655 move to jenkins-hbase4.apache.org,37441,1690089043078 record at close sequenceid=2 2023-07-23 05:10:50,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,100 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=33 2023-07-23 05:10:50,100 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=33, state=SUCCESS; CloseRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,46173,1690089043304 in 196 msec 2023-07-23 05:10:50,101 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41981,1690089047062; forceNewPlan=false, retain=false 2023-07-23 05:10:50,108 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:50,109 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=CLOSED 2023-07-23 05:10:50,109 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050109"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089050109"}]},"ts":"1690089050109"} 2023-07-23 05:10:50,109 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 
2023-07-23 05:10:50,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6955eb3f7d33cecac81a52c3cad1f458: 2023-07-23 05:10:50,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 6955eb3f7d33cecac81a52c3cad1f458 move to jenkins-hbase4.apache.org,37441,1690089043078 record at close sequenceid=2 2023-07-23 05:10:50,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,115 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=CLOSED 2023-07-23 05:10:50,115 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050115"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089050115"}]},"ts":"1690089050115"} 2023-07-23 05:10:50,117 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=28 2023-07-23 05:10:50,117 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=28, state=SUCCESS; CloseRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,45681,1690089042835 in 225 msec 2023-07-23 05:10:50,119 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=28, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:50,125 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=31, resume processing ppid=27 2023-07-23 05:10:50,125 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=31, ppid=27, state=SUCCESS; CloseRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,46173,1690089043304 in 237 msec 2023-07-23 05:10:50,126 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=27, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:50,233 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
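(Aside, not part of the captured output: the close/reassign cycle above, pids 26 through 35 ending in the balancer's "Reassigned 5 regions", is the server-side effect of the moveTables request logged at 05:10:49,859. A rough sketch of how a client issues that request follows, assuming the branch-2.x hbase-rsgroup RSGroupAdminClient API; creating the target group and moving servers into it with addRSGroup/moveServers is elided.)

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdmin rsGroupAdmin = new RSGroupAdminClient(conn);
      // Triggers the REOPEN/MOVE TransitRegionStateProcedures seen above:
      // each region of the table is closed on its current server and
      // reopened on a server belonging to the target group.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testTableMoveTruncateAndDrop")),
          "Group_testTableMoveTruncateAndDrop_595606539");
    }
  }
}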
2023-07-23 05:10:50,233 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,233 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,233 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,233 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,233 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,234 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050233"}]},"ts":"1690089050233"} 2023-07-23 05:10:50,234 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050233"}]},"ts":"1690089050233"} 2023-07-23 05:10:50,234 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050233"}]},"ts":"1690089050233"} 2023-07-23 05:10:50,234 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050233"}]},"ts":"1690089050233"} 2023-07-23 05:10:50,234 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050233"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050233"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050233"}]},"ts":"1690089050233"} 2023-07-23 05:10:50,237 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=27, state=RUNNABLE; OpenRegionProcedure 
6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:50,239 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=28, state=RUNNABLE; OpenRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:50,240 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=30, state=RUNNABLE; OpenRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:50,241 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=26, state=RUNNABLE; OpenRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:50,243 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=33, state=RUNNABLE; OpenRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:50,394 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,394 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:10:50,396 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:57964, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:10:50,399 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 
2023-07-23 05:10:50,399 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => eee48e49fb983f0b754fb402c78f98d1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 05:10:50,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:50,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,400 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,402 INFO [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,404 DEBUG [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/f 2023-07-23 05:10:50,404 DEBUG [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/f 2023-07-23 05:10:50,404 INFO [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region eee48e49fb983f0b754fb402c78f98d1 columnFamilyName f 2023-07-23 05:10:50,405 INFO [StoreOpener-eee48e49fb983f0b754fb402c78f98d1-1] regionserver.HStore(310): Store=eee48e49fb983f0b754fb402c78f98d1/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:50,405 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:50,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => de03e3a438580d72768668169e6084d9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 05:10:50,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:50,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,407 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,408 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,409 INFO [StoreOpener-de03e3a438580d72768668169e6084d9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,410 DEBUG [StoreOpener-de03e3a438580d72768668169e6084d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/f 2023-07-23 05:10:50,411 DEBUG [StoreOpener-de03e3a438580d72768668169e6084d9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/f 2023-07-23 05:10:50,411 INFO [StoreOpener-de03e3a438580d72768668169e6084d9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region de03e3a438580d72768668169e6084d9 
columnFamilyName f 2023-07-23 05:10:50,412 INFO [StoreOpener-de03e3a438580d72768668169e6084d9-1] regionserver.HStore(310): Store=de03e3a438580d72768668169e6084d9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:50,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,419 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,424 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for de03e3a438580d72768668169e6084d9 2023-07-23 05:10:50,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:50,425 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened de03e3a438580d72768668169e6084d9; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9551226720, jitterRate=-0.11047269403934479}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:50,425 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for de03e3a438580d72768668169e6084d9: 2023-07-23 05:10:50,426 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened eee48e49fb983f0b754fb402c78f98d1; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11536260000, jitterRate=0.07439793646335602}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:50,427 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for eee48e49fb983f0b754fb402c78f98d1: 2023-07-23 05:10:50,427 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9., pid=40, masterSystemTime=1690089050394 2023-07-23 05:10:50,430 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1., pid=38, masterSystemTime=1690089050391 2023-07-23 05:10:50,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:50,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:50,433 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:50,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5c5ed9bf96bfd001c34b57b1293ac322, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 05:10:50,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,434 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:50,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,435 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,435 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050435"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089050435"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089050435"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089050435"}]},"ts":"1690089050435"} 2023-07-23 05:10:50,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:50,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:50,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 
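For reference (not part of the captured log): the open-region entries around this point show the test table's five regions, bounded by '', 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B' and 'zzzzz', each with the single column family 'f'. A minimal, hypothetical sketch of one way to create a table with that layout using the standard HBase Admin API (table and family names are taken from the log; the exact creation call used by TestRSGroupsAdmin1 may differ):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePresplitTableSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Five regions between "aaaaa" and "zzzzz": this overload interpolates the
      // intermediate split points, which is consistent with the binary
      // STARTKEY/ENDKEY values seen in the open-region entries above.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(tn)
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build(),
          Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 5);
    }
  }
}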
2023-07-23 05:10:50,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6955eb3f7d33cecac81a52c3cad1f458, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 05:10:50,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:50,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,437 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,439 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050437"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089050437"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089050437"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089050437"}]},"ts":"1690089050437"} 2023-07-23 05:10:50,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=33 2023-07-23 05:10:50,445 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=33, state=SUCCESS; OpenRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,41981,1690089047062 in 197 msec 2023-07-23 05:10:50,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=30 2023-07-23 05:10:50,446 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=30, state=SUCCESS; OpenRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,37441,1690089043078 in 202 msec 2023-07-23 05:10:50,448 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, REOPEN/MOVE in 555 msec 2023-07-23 05:10:50,449 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, REOPEN/MOVE in 566 msec 2023-07-23 05:10:50,449 INFO [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column 
family f of region 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,450 INFO [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,451 DEBUG [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/f 2023-07-23 05:10:50,451 DEBUG [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/f 2023-07-23 05:10:50,451 DEBUG [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/f 2023-07-23 05:10:50,451 DEBUG [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/f 2023-07-23 05:10:50,452 INFO [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6955eb3f7d33cecac81a52c3cad1f458 columnFamilyName f 2023-07-23 05:10:50,452 INFO [StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5c5ed9bf96bfd001c34b57b1293ac322 columnFamilyName f 2023-07-23 05:10:50,452 INFO [StoreOpener-6955eb3f7d33cecac81a52c3cad1f458-1] regionserver.HStore(310): Store=6955eb3f7d33cecac81a52c3cad1f458/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:50,452 INFO 
[StoreOpener-5c5ed9bf96bfd001c34b57b1293ac322-1] regionserver.HStore(310): Store=5c5ed9bf96bfd001c34b57b1293ac322/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:50,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,454 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,456 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:50,460 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:50,462 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5c5ed9bf96bfd001c34b57b1293ac322; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10616760160, jitterRate=-0.011237159371376038}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:50,462 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5c5ed9bf96bfd001c34b57b1293ac322: 2023-07-23 05:10:50,463 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322., pid=39, masterSystemTime=1690089050394 2023-07-23 05:10:50,464 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6955eb3f7d33cecac81a52c3cad1f458; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11903924800, jitterRate=0.10863938927650452}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:50,464 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6955eb3f7d33cecac81a52c3cad1f458: 2023-07-23 05:10:50,466 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458., pid=36, masterSystemTime=1690089050391 2023-07-23 05:10:50,467 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:50,467 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:50,469 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=26 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,469 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050469"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089050469"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089050469"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089050469"}]},"ts":"1690089050469"} 2023-07-23 05:10:50,470 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:50,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:50,470 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 
2023-07-23 05:10:50,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a30738775011c63d57d74eb291843655, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 05:10:50,471 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=27 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,471 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050471"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089050471"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089050471"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089050471"}]},"ts":"1690089050471"} 2023-07-23 05:10:50,471 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:50,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,472 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,477 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=26 2023-07-23 05:10:50,477 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=26, state=SUCCESS; OpenRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,41981,1690089047062 in 230 msec 2023-07-23 05:10:50,477 INFO [StoreOpener-a30738775011c63d57d74eb291843655-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,478 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=27 2023-07-23 05:10:50,479 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=27, state=SUCCESS; OpenRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,37441,1690089043078 in 237 msec 2023-07-23 05:10:50,479 DEBUG [StoreOpener-a30738775011c63d57d74eb291843655-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/f 2023-07-23 05:10:50,479 DEBUG [StoreOpener-a30738775011c63d57d74eb291843655-1] util.CommonFSUtils(522): Set storagePolicy=HOT 
for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/f 2023-07-23 05:10:50,480 INFO [StoreOpener-a30738775011c63d57d74eb291843655-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a30738775011c63d57d74eb291843655 columnFamilyName f 2023-07-23 05:10:50,480 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, REOPEN/MOVE in 607 msec 2023-07-23 05:10:50,481 INFO [StoreOpener-a30738775011c63d57d74eb291843655-1] regionserver.HStore(310): Store=a30738775011c63d57d74eb291843655/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:50,481 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=27, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, REOPEN/MOVE in 606 msec 2023-07-23 05:10:50,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,484 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,488 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a30738775011c63d57d74eb291843655 2023-07-23 05:10:50,489 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a30738775011c63d57d74eb291843655; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11668025600, jitterRate=0.08666956424713135}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:50,489 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a30738775011c63d57d74eb291843655: 2023-07-23 05:10:50,491 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655., pid=37, masterSystemTime=1690089050391 2023-07-23 05:10:50,493 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:50,493 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:50,494 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=28 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,494 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050494"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089050494"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089050494"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089050494"}]},"ts":"1690089050494"} 2023-07-23 05:10:50,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=28 2023-07-23 05:10:50,499 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=28, state=SUCCESS; OpenRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,37441,1690089043078 in 259 msec 2023-07-23 05:10:50,501 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=28, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, REOPEN/MOVE in 622 msec 2023-07-23 05:10:50,894 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=26 2023-07-23 05:10:50,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_595606539. 
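For reference (not part of the captured log): the REOPEN/MOVE procedures above, and the RSGroupAdminService.MoveTables / GetRSGroupInfoOfTable requests logged just below, correspond to a client moving the table into the group Group_testTableMoveTruncateAndDrop_595606539. A minimal sketch of such a call, assuming the RSGroupAdminClient helper shipped with this hbase-rsgroup module and a target group that already exists and holds at least one RegionServer:

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToRSGroupSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Group name taken from the log; assumed to already contain the
      // RegionServers the regions were reopened on.
      String targetGroup = "Group_testTableMoveTruncateAndDrop_595606539";
      rsGroupAdmin.moveTables(Collections.singleton(tn), targetGroup);
      // Mirrors the GetRSGroupInfoOfTable request seen right after the move.
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(tn);
      System.out.println(tn + " is now in rsgroup " + info.getName());
    }
  }
}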
2023-07-23 05:10:50,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:50,898 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:50,899 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:50,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:50,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:10:50,903 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:50,910 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:50,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:50,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=41, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:50,931 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089050931"}]},"ts":"1690089050931"} 2023-07-23 05:10:50,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-23 05:10:50,933 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-23 05:10:50,935 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-23 05:10:50,937 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, UNASSIGN}, {pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, UNASSIGN}, {pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, UNASSIGN}, {pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, UNASSIGN}, {pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, UNASSIGN}] 2023-07-23 05:10:50,940 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=44, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, UNASSIGN 2023-07-23 05:10:50,940 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=43, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, UNASSIGN 2023-07-23 05:10:50,940 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=42, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, UNASSIGN 2023-07-23 05:10:50,940 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, UNASSIGN 2023-07-23 05:10:50,941 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=41, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, UNASSIGN 2023-07-23 05:10:50,943 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,943 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,943 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,943 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:50,943 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:50,943 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050943"}]},"ts":"1690089050943"} 2023-07-23 05:10:50,943 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050943"}]},"ts":"1690089050943"} 2023-07-23 05:10:50,943 DEBUG 
[PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050943"}]},"ts":"1690089050943"} 2023-07-23 05:10:50,943 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089050943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050943"}]},"ts":"1690089050943"} 2023-07-23 05:10:50,943 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089050943"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089050943"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089050943"}]},"ts":"1690089050943"} 2023-07-23 05:10:50,946 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=47, ppid=43, state=RUNNABLE; CloseRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:50,947 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=48, ppid=45, state=RUNNABLE; CloseRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:50,948 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=49, ppid=42, state=RUNNABLE; CloseRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:50,950 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=44, state=RUNNABLE; CloseRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:50,951 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=46, state=RUNNABLE; CloseRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:51,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-23 05:10:51,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:51,108 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a30738775011c63d57d74eb291843655 2023-07-23 05:10:51,109 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5c5ed9bf96bfd001c34b57b1293ac322, disabling compactions & flushes 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a30738775011c63d57d74eb291843655, disabling compactions & flushes 2023-07-23 05:10:51,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:51,110 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. after waiting 0 ms 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. after waiting 0 ms 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 2023-07-23 05:10:51,110 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:51,121 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:10:51,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:10:51,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655. 2023-07-23 05:10:51,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a30738775011c63d57d74eb291843655: 2023-07-23 05:10:51,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322. 
2023-07-23 05:10:51,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5c5ed9bf96bfd001c34b57b1293ac322: 2023-07-23 05:10:51,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a30738775011c63d57d74eb291843655 2023-07-23 05:10:51,126 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:51,126 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6955eb3f7d33cecac81a52c3cad1f458, disabling compactions & flushes 2023-07-23 05:10:51,127 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:51,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:51,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. after waiting 0 ms 2023-07-23 05:10:51,127 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:51,127 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=44 updating hbase:meta row=a30738775011c63d57d74eb291843655, regionState=CLOSED 2023-07-23 05:10:51,127 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051127"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051127"}]},"ts":"1690089051127"} 2023-07-23 05:10:51,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:51,128 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close de03e3a438580d72768668169e6084d9 2023-07-23 05:10:51,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing de03e3a438580d72768668169e6084d9, disabling compactions & flushes 2023-07-23 05:10:51,129 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:51,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:51,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. after waiting 0 ms 2023-07-23 05:10:51,129 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 
2023-07-23 05:10:51,132 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=42 updating hbase:meta row=5c5ed9bf96bfd001c34b57b1293ac322, regionState=CLOSED 2023-07-23 05:10:51,133 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051132"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051132"}]},"ts":"1690089051132"} 2023-07-23 05:10:51,134 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:10:51,139 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458. 2023-07-23 05:10:51,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6955eb3f7d33cecac81a52c3cad1f458: 2023-07-23 05:10:51,139 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:10:51,140 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9. 2023-07-23 05:10:51,140 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for de03e3a438580d72768668169e6084d9: 2023-07-23 05:10:51,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=44 2023-07-23 05:10:51,141 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=44, state=SUCCESS; CloseRegionProcedure a30738775011c63d57d74eb291843655, server=jenkins-hbase4.apache.org,37441,1690089043078 in 180 msec 2023-07-23 05:10:51,143 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=42 2023-07-23 05:10:51,143 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=42, state=SUCCESS; CloseRegionProcedure 5c5ed9bf96bfd001c34b57b1293ac322, server=jenkins-hbase4.apache.org,41981,1690089047062 in 187 msec 2023-07-23 05:10:51,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:51,144 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:51,144 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a30738775011c63d57d74eb291843655, UNASSIGN in 204 msec 2023-07-23 05:10:51,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing eee48e49fb983f0b754fb402c78f98d1, disabling compactions & flushes 2023-07-23 05:10:51,145 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:51,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:51,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. after waiting 0 ms 2023-07-23 05:10:51,145 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 2023-07-23 05:10:51,145 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=43 updating hbase:meta row=6955eb3f7d33cecac81a52c3cad1f458, regionState=CLOSED 2023-07-23 05:10:51,145 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051145"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051145"}]},"ts":"1690089051145"} 2023-07-23 05:10:51,147 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed de03e3a438580d72768668169e6084d9 2023-07-23 05:10:51,147 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5c5ed9bf96bfd001c34b57b1293ac322, UNASSIGN in 206 msec 2023-07-23 05:10:51,147 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=de03e3a438580d72768668169e6084d9, regionState=CLOSED 2023-07-23 05:10:51,147 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051147"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051147"}]},"ts":"1690089051147"} 2023-07-23 05:10:51,153 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=47, resume processing ppid=43 2023-07-23 05:10:51,153 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=43, state=SUCCESS; CloseRegionProcedure 6955eb3f7d33cecac81a52c3cad1f458, server=jenkins-hbase4.apache.org,37441,1690089043078 in 202 msec 2023-07-23 05:10:51,156 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:10:51,157 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1. 
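For reference (not part of the captured log): the UNASSIGN/close entries above, the DISABLE completion, and the TruncateTableProcedure with preserveSplits=true that follow are what a plain disable-then-truncate sequence against the Admin API produces. A minimal sketch under that assumption (the test itself may drive these steps through its own HBaseAdmin helpers):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DisableAndTruncateSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      admin.disableTable(tn);        // DisableTableProcedure, pid=41 above
      admin.truncateTable(tn, true); // TruncateTableProcedure with preserveSplits=true, pid=52 below
    }
  }
}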
2023-07-23 05:10:51,157 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for eee48e49fb983f0b754fb402c78f98d1: 2023-07-23 05:10:51,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=46 2023-07-23 05:10:51,159 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6955eb3f7d33cecac81a52c3cad1f458, UNASSIGN in 216 msec 2023-07-23 05:10:51,159 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=46, state=SUCCESS; CloseRegionProcedure de03e3a438580d72768668169e6084d9, server=jenkins-hbase4.apache.org,41981,1690089047062 in 199 msec 2023-07-23 05:10:51,160 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:51,161 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=eee48e49fb983f0b754fb402c78f98d1, regionState=CLOSED 2023-07-23 05:10:51,161 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051161"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051161"}]},"ts":"1690089051161"} 2023-07-23 05:10:51,161 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=de03e3a438580d72768668169e6084d9, UNASSIGN in 222 msec 2023-07-23 05:10:51,166 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=48, resume processing ppid=45 2023-07-23 05:10:51,167 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=45, state=SUCCESS; CloseRegionProcedure eee48e49fb983f0b754fb402c78f98d1, server=jenkins-hbase4.apache.org,37441,1690089043078 in 216 msec 2023-07-23 05:10:51,171 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=45, resume processing ppid=41 2023-07-23 05:10:51,171 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=41, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=eee48e49fb983f0b754fb402c78f98d1, UNASSIGN in 230 msec 2023-07-23 05:10:51,173 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089051173"}]},"ts":"1690089051173"} 2023-07-23 05:10:51,175 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-23 05:10:51,177 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-23 05:10:51,180 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=41, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 261 msec 2023-07-23 05:10:51,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=41 2023-07-23 05:10:51,235 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: 
default:Group_testTableMoveTruncateAndDrop, procId: 41 completed 2023-07-23 05:10:51,237 INFO [Listener at localhost/44477] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:51,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:51,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=52, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-23 05:10:51,255 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-23 05:10:51,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-23 05:10:51,274 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:51,275 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:51,275 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:51,275 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:51,276 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:51,280 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/recovered.edits] 2023-07-23 05:10:51,280 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/recovered.edits] 2023-07-23 05:10:51,280 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/f, FileablePath, 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/recovered.edits] 2023-07-23 05:10:51,280 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/recovered.edits] 2023-07-23 05:10:51,280 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/recovered.edits] 2023-07-23 05:10:51,310 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 05:10:51,319 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655/recovered.edits/7.seqid 2023-07-23 05:10:51,321 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458/recovered.edits/7.seqid 2023-07-23 05:10:51,321 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9/recovered.edits/7.seqid 2023-07-23 05:10:51,322 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a30738775011c63d57d74eb291843655 2023-07-23 05:10:51,324 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1/recovered.edits/7.seqid 2023-07-23 05:10:51,324 
DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6955eb3f7d33cecac81a52c3cad1f458 2023-07-23 05:10:51,325 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/de03e3a438580d72768668169e6084d9 2023-07-23 05:10:51,325 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/eee48e49fb983f0b754fb402c78f98d1 2023-07-23 05:10:51,329 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322/recovered.edits/7.seqid 2023-07-23 05:10:51,331 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5c5ed9bf96bfd001c34b57b1293ac322 2023-07-23 05:10:51,331 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 05:10:51,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-23 05:10:51,365 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-23 05:10:51,371 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-23 05:10:51,372 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
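For context, the DISABLE and TRUNCATE requests logged above (procId 41 and 52, preserveSplits=true) correspond to ordinary client-side Admin calls. A minimal, hypothetical sketch of that client sequence follows — the connection setup and table name are assumptions taken from the log, not code from the test itself:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncatePreservingSplits {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();   // picks up hbase-site.xml from the classpath
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // DisableTableProcedure (pid=41 in the log): the table must be disabled before truncation.
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      // TruncateTableProcedure (pid=52): preserveSplits=true keeps the existing region boundaries.
      // The synchronous call blocks while the client polls the master, which shows up in the log
      // as the repeated "Checking to see if procedure is done pid=52" entries.
      admin.truncateTable(table, true);
    }
  }
}
```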
2023-07-23 05:10:51,372 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089051372"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:51,372 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089051372"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:51,372 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089048523.a30738775011c63d57d74eb291843655.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089051372"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:51,372 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089051372"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:51,372 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089051372"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:51,376 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 05:10:51,376 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5c5ed9bf96bfd001c34b57b1293ac322, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089048523.5c5ed9bf96bfd001c34b57b1293ac322.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 6955eb3f7d33cecac81a52c3cad1f458, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089048523.6955eb3f7d33cecac81a52c3cad1f458.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => a30738775011c63d57d74eb291843655, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089048523.a30738775011c63d57d74eb291843655.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => eee48e49fb983f0b754fb402c78f98d1, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089048523.eee48e49fb983f0b754fb402c78f98d1.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => de03e3a438580d72768668169e6084d9, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089048523.de03e3a438580d72768668169e6084d9.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 05:10:51,376 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-23 05:10:51,376 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089051376"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:51,379 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-23 05:10:51,396 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,397 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,397 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,397 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,397 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 empty. 2023-07-23 05:10:51,397 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,398 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d empty. 2023-07-23 05:10:51,399 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f empty. 2023-07-23 05:10:51,399 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 empty. 2023-07-23 05:10:51,399 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,399 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 empty. 
2023-07-23 05:10:51,400 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,403 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,403 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,404 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,404 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 05:10:51,414 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 05:10:51,415 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-23 05:10:51,415 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:10:51,415 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-23 05:10:51,416 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 05:10:51,416 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-23 05:10:51,417 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 05:10:51,418 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 05:10:51,457 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:51,459 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 60e652f34d21c8d215e7757e954e6147, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, 
tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:51,459 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 8360c6a7edacba13a3c66fce15bde27d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:51,459 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6d815181d9023f8dc69cd8c24abfe908, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:51,495 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,495 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 6d815181d9023f8dc69cd8c24abfe908, disabling compactions & flushes 2023-07-23 05:10:51,495 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:51,495 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:51,495 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. after waiting 0 ms 2023-07-23 05:10:51,495 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 
2023-07-23 05:10:51,495 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:51,495 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 6d815181d9023f8dc69cd8c24abfe908: 2023-07-23 05:10:51,496 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => c63615dc0209f4fb87914eb4c514928f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:51,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 60e652f34d21c8d215e7757e954e6147, disabling compactions & flushes 2023-07-23 05:10:51,497 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:51,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:51,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. after waiting 0 ms 2023-07-23 05:10:51,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:51,497 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 
2023-07-23 05:10:51,497 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 60e652f34d21c8d215e7757e954e6147: 2023-07-23 05:10:51,497 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => fe937f95a34599ca17379f2ef7347d80, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:51,500 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,500 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 8360c6a7edacba13a3c66fce15bde27d, disabling compactions & flushes 2023-07-23 05:10:51,500 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:51,500 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:51,501 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. after waiting 0 ms 2023-07-23 05:10:51,501 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:51,501 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 
2023-07-23 05:10:51,501 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 8360c6a7edacba13a3c66fce15bde27d: 2023-07-23 05:10:51,522 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,522 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing c63615dc0209f4fb87914eb4c514928f, disabling compactions & flushes 2023-07-23 05:10:51,522 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:51,522 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:51,523 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. after waiting 0 ms 2023-07-23 05:10:51,523 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:51,523 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:51,523 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for c63615dc0209f4fb87914eb4c514928f: 2023-07-23 05:10:51,525 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,526 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing fe937f95a34599ca17379f2ef7347d80, disabling compactions & flushes 2023-07-23 05:10:51,526 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:51,526 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:51,526 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 
after waiting 0 ms 2023-07-23 05:10:51,526 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:51,526 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:51,526 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for fe937f95a34599ca17379f2ef7347d80: 2023-07-23 05:10:51,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051531"}]},"ts":"1690089051531"} 2023-07-23 05:10:51,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051531"}]},"ts":"1690089051531"} 2023-07-23 05:10:51,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051531"}]},"ts":"1690089051531"} 2023-07-23 05:10:51,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051531"}]},"ts":"1690089051531"} 2023-07-23 05:10:51,531 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089051531"}]},"ts":"1690089051531"} 2023-07-23 05:10:51,535 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
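The recreate step above rebuilds the table from the descriptor and split points captured before the old regions were dropped: a single 'f' family with VERSIONS => '1' and the original boundaries 'aaaaa', 'i\xBF\x14i\xBE', 'r\x1C\xC7r\x1B', 'zzzzz', which is why five fresh regions are added back to meta. Purely as an illustration of what an equivalent explicit creation would look like — this code is not in the test, and the names and keys are transcribed from the log:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class RecreateWithSplits {
  // Split boundaries as logged; the middle two are the escaped binary keys
  // i\xBF\x14i\xBE and r\x1C\xC7r\x1B.
  static final byte[][] SPLITS = new byte[][] {
      Bytes.toBytes("aaaaa"),
      new byte[] { 'i', (byte) 0xBF, 0x14, 'i', (byte) 0xBE },
      new byte[] { 'r', 0x1C, (byte) 0xC7, 'r', 0x1B },
      Bytes.toBytes("zzzzz")
  };

  static void recreate(Admin admin) throws java.io.IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)        // VERSIONS => '1' in the logged descriptor
            .build())
        .build();
    admin.createTable(desc, SPLITS);  // 4 split keys -> 5 regions, as in the log
  }
}
```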
2023-07-23 05:10:51,536 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089051536"}]},"ts":"1690089051536"} 2023-07-23 05:10:51,538 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-23 05:10:51,544 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:51,544 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:51,544 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:51,544 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:51,547 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, ASSIGN}, {pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, ASSIGN}, {pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, ASSIGN}, {pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c63615dc0209f4fb87914eb4c514928f, ASSIGN}, {pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, ASSIGN}] 2023-07-23 05:10:51,549 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, ASSIGN 2023-07-23 05:10:51,549 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, ASSIGN 2023-07-23 05:10:51,549 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, ASSIGN 2023-07-23 05:10:51,549 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c63615dc0209f4fb87914eb4c514928f, ASSIGN 2023-07-23 05:10:51,549 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, ASSIGN 2023-07-23 05:10:51,550 INFO [PEWorker-1] 
assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:51,550 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=53, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:51,551 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c63615dc0209f4fb87914eb4c514928f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41981,1690089047062; forceNewPlan=false, retain=false 2023-07-23 05:10:51,551 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=54, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:51,551 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=55, ppid=52, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41981,1690089047062; forceNewPlan=false, retain=false 2023-07-23 05:10:51,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-23 05:10:51,700 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-23 05:10:51,704 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=60e652f34d21c8d215e7757e954e6147, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:51,704 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=c63615dc0209f4fb87914eb4c514928f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:51,704 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089051704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089051704"}]},"ts":"1690089051704"} 2023-07-23 05:10:51,704 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051704"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089051704"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089051704"}]},"ts":"1690089051704"} 2023-07-23 05:10:51,705 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=fe937f95a34599ca17379f2ef7347d80, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:51,705 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=8360c6a7edacba13a3c66fce15bde27d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:51,705 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089051705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089051705"}]},"ts":"1690089051705"} 2023-07-23 05:10:51,705 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=6d815181d9023f8dc69cd8c24abfe908, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:51,705 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089051705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089051705"}]},"ts":"1690089051705"} 2023-07-23 05:10:51,705 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051705"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089051705"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089051705"}]},"ts":"1690089051705"} 2023-07-23 05:10:51,708 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=58, ppid=55, state=RUNNABLE; OpenRegionProcedure 
60e652f34d21c8d215e7757e954e6147, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:51,710 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=59, ppid=56, state=RUNNABLE; OpenRegionProcedure c63615dc0209f4fb87914eb4c514928f, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:51,711 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=60, ppid=57, state=RUNNABLE; OpenRegionProcedure fe937f95a34599ca17379f2ef7347d80, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:51,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=54, state=RUNNABLE; OpenRegionProcedure 8360c6a7edacba13a3c66fce15bde27d, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:51,713 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=53, state=RUNNABLE; OpenRegionProcedure 6d815181d9023f8dc69cd8c24abfe908, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:51,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-23 05:10:51,866 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:51,866 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 60e652f34d21c8d215e7757e954e6147, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 05:10:51,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,867 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,868 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 
2023-07-23 05:10:51,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fe937f95a34599ca17379f2ef7347d80, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 05:10:51,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,868 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,868 INFO [StoreOpener-60e652f34d21c8d215e7757e954e6147-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,870 INFO [StoreOpener-fe937f95a34599ca17379f2ef7347d80-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,871 DEBUG [StoreOpener-60e652f34d21c8d215e7757e954e6147-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/f 2023-07-23 05:10:51,871 DEBUG [StoreOpener-60e652f34d21c8d215e7757e954e6147-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/f 2023-07-23 05:10:51,872 INFO [StoreOpener-60e652f34d21c8d215e7757e954e6147-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 60e652f34d21c8d215e7757e954e6147 columnFamilyName f 2023-07-23 05:10:51,872 DEBUG [StoreOpener-fe937f95a34599ca17379f2ef7347d80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/f 2023-07-23 05:10:51,872 DEBUG [StoreOpener-fe937f95a34599ca17379f2ef7347d80-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/f 2023-07-23 05:10:51,872 INFO [StoreOpener-fe937f95a34599ca17379f2ef7347d80-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fe937f95a34599ca17379f2ef7347d80 columnFamilyName f 2023-07-23 05:10:51,872 INFO [StoreOpener-60e652f34d21c8d215e7757e954e6147-1] regionserver.HStore(310): Store=60e652f34d21c8d215e7757e954e6147/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:51,873 INFO [StoreOpener-fe937f95a34599ca17379f2ef7347d80-1] regionserver.HStore(310): Store=fe937f95a34599ca17379f2ef7347d80/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:51,873 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:51,879 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:51,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:51,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:51,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fe937f95a34599ca17379f2ef7347d80; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11660193600, jitterRate=0.0859401524066925}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:51,885 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 60e652f34d21c8d215e7757e954e6147; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11568176320, jitterRate=0.07737037539482117}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fe937f95a34599ca17379f2ef7347d80: 2023-07-23 05:10:51,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 60e652f34d21c8d215e7757e954e6147: 2023-07-23 05:10:51,886 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147., pid=58, masterSystemTime=1690089051862 2023-07-23 05:10:51,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80., pid=60, masterSystemTime=1690089051864 2023-07-23 05:10:51,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:51,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:51,889 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 
2023-07-23 05:10:51,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c63615dc0209f4fb87914eb4c514928f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 05:10:51,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,890 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=55 updating hbase:meta row=60e652f34d21c8d215e7757e954e6147, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:51,890 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051889"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089051889"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089051889"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089051889"}]},"ts":"1690089051889"} 2023-07-23 05:10:51,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:51,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:51,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 
2023-07-23 05:10:51,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6d815181d9023f8dc69cd8c24abfe908, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 05:10:51,891 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=fe937f95a34599ca17379f2ef7347d80, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:51,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,891 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051891"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089051891"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089051891"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089051891"}]},"ts":"1690089051891"} 2023-07-23 05:10:51,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,892 INFO [StoreOpener-c63615dc0209f4fb87914eb4c514928f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,895 INFO [StoreOpener-6d815181d9023f8dc69cd8c24abfe908-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,896 DEBUG [StoreOpener-c63615dc0209f4fb87914eb4c514928f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/f 2023-07-23 05:10:51,896 DEBUG [StoreOpener-c63615dc0209f4fb87914eb4c514928f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/f 2023-07-23 05:10:51,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=58, resume processing ppid=55 2023-07-23 05:10:51,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; 
OpenRegionProcedure 60e652f34d21c8d215e7757e954e6147, server=jenkins-hbase4.apache.org,41981,1690089047062 in 185 msec 2023-07-23 05:10:51,897 INFO [StoreOpener-c63615dc0209f4fb87914eb4c514928f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c63615dc0209f4fb87914eb4c514928f columnFamilyName f 2023-07-23 05:10:51,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=57 2023-07-23 05:10:51,897 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=57, state=SUCCESS; OpenRegionProcedure fe937f95a34599ca17379f2ef7347d80, server=jenkins-hbase4.apache.org,37441,1690089043078 in 183 msec 2023-07-23 05:10:51,898 DEBUG [StoreOpener-6d815181d9023f8dc69cd8c24abfe908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/f 2023-07-23 05:10:51,898 DEBUG [StoreOpener-6d815181d9023f8dc69cd8c24abfe908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/f 2023-07-23 05:10:51,898 INFO [StoreOpener-6d815181d9023f8dc69cd8c24abfe908-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6d815181d9023f8dc69cd8c24abfe908 columnFamilyName f 2023-07-23 05:10:51,899 INFO [StoreOpener-c63615dc0209f4fb87914eb4c514928f-1] regionserver.HStore(310): Store=c63615dc0209f4fb87914eb4c514928f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:51,899 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=55, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, ASSIGN in 349 msec 2023-07-23 05:10:51,899 INFO [StoreOpener-6d815181d9023f8dc69cd8c24abfe908-1] regionserver.HStore(310): Store=6d815181d9023f8dc69cd8c24abfe908/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:51,899 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, ASSIGN in 350 msec 2023-07-23 05:10:51,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:51,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:51,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:51,909 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:51,910 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c63615dc0209f4fb87914eb4c514928f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10790516640, jitterRate=0.0049451738595962524}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:51,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c63615dc0209f4fb87914eb4c514928f: 2023-07-23 05:10:51,910 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6d815181d9023f8dc69cd8c24abfe908; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10557461760, jitterRate=-0.016759753227233887}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:51,910 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6d815181d9023f8dc69cd8c24abfe908: 
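[Editor's note] The CompactionConfiguration lines above print the per-store compaction parameters in effect (minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, off-peak ratio 5.0, major period 604800000 ms, jitter 0.5). A small sketch of the standard HBase configuration keys that back those fields follows; the key-to-field mapping is my reading of the log line, and the values shown are the stock defaults, not settings applied by this test.

// Illustrative sketch only: standard compaction-related configuration keys.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionDefaults {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period: 7 days
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
  }
}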
2023-07-23 05:10:51,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f., pid=59, masterSystemTime=1690089051862 2023-07-23 05:10:51,911 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908., pid=62, masterSystemTime=1690089051864 2023-07-23 05:10:51,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:51,913 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:51,914 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=c63615dc0209f4fb87914eb4c514928f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:51,914 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051914"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089051914"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089051914"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089051914"}]},"ts":"1690089051914"} 2023-07-23 05:10:51,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:51,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:51,914 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 
2023-07-23 05:10:51,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8360c6a7edacba13a3c66fce15bde27d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 05:10:51,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:51,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,915 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,915 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=53 updating hbase:meta row=6d815181d9023f8dc69cd8c24abfe908, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:51,915 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089051915"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089051915"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089051915"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089051915"}]},"ts":"1690089051915"} 2023-07-23 05:10:51,919 INFO [StoreOpener-8360c6a7edacba13a3c66fce15bde27d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=59, resume processing ppid=56 2023-07-23 05:10:51,922 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=56, state=SUCCESS; OpenRegionProcedure c63615dc0209f4fb87914eb4c514928f, server=jenkins-hbase4.apache.org,41981,1690089047062 in 208 msec 2023-07-23 05:10:51,922 DEBUG [StoreOpener-8360c6a7edacba13a3c66fce15bde27d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/f 2023-07-23 05:10:51,922 DEBUG [StoreOpener-8360c6a7edacba13a3c66fce15bde27d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/f 2023-07-23 05:10:51,923 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=53 2023-07-23 05:10:51,923 INFO [StoreOpener-8360c6a7edacba13a3c66fce15bde27d-1] 
compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8360c6a7edacba13a3c66fce15bde27d columnFamilyName f 2023-07-23 05:10:51,923 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=53, state=SUCCESS; OpenRegionProcedure 6d815181d9023f8dc69cd8c24abfe908, server=jenkins-hbase4.apache.org,37441,1690089043078 in 206 msec 2023-07-23 05:10:51,924 INFO [StoreOpener-8360c6a7edacba13a3c66fce15bde27d-1] regionserver.HStore(310): Store=8360c6a7edacba13a3c66fce15bde27d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:51,924 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c63615dc0209f4fb87914eb4c514928f, ASSIGN in 375 msec 2023-07-23 05:10:51,925 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,926 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, ASSIGN in 379 msec 2023-07-23 05:10:51,926 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,929 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:51,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:51,932 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8360c6a7edacba13a3c66fce15bde27d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10434864160, jitterRate=-0.028177544474601746}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:51,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8360c6a7edacba13a3c66fce15bde27d: 2023-07-23 05:10:51,950 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d., pid=61, masterSystemTime=1690089051864 2023-07-23 05:10:51,953 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:51,953 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:51,954 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=54 updating hbase:meta row=8360c6a7edacba13a3c66fce15bde27d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:51,954 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089051954"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089051954"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089051954"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089051954"}]},"ts":"1690089051954"} 2023-07-23 05:10:51,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=54 2023-07-23 05:10:51,960 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=54, state=SUCCESS; OpenRegionProcedure 8360c6a7edacba13a3c66fce15bde27d, server=jenkins-hbase4.apache.org,37441,1690089043078 in 244 msec 2023-07-23 05:10:51,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=52 2023-07-23 05:10:51,964 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=52, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, ASSIGN in 413 msec 2023-07-23 05:10:51,964 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089051964"}]},"ts":"1690089051964"} 2023-07-23 05:10:51,967 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-23 05:10:51,969 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-23 05:10:51,972 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=52, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 725 msec 2023-07-23 05:10:52,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=52 2023-07-23 05:10:52,363 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 52 completed 2023-07-23 05:10:52,364 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,364 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:52,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:52,367 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=63, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,380 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-23 05:10:52,381 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089052381"}]},"ts":"1690089052381"} 2023-07-23 05:10:52,383 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-23 05:10:52,385 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-23 05:10:52,387 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, UNASSIGN}, {pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, UNASSIGN}, {pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, UNASSIGN}, {pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c63615dc0209f4fb87914eb4c514928f, UNASSIGN}, {pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, UNASSIGN}] 2023-07-23 05:10:52,390 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=64, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, UNASSIGN 2023-07-23 05:10:52,390 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, 
region=c63615dc0209f4fb87914eb4c514928f, UNASSIGN 2023-07-23 05:10:52,390 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=65, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, UNASSIGN 2023-07-23 05:10:52,390 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, UNASSIGN 2023-07-23 05:10:52,390 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=66, ppid=63, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, UNASSIGN 2023-07-23 05:10:52,391 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=c63615dc0209f4fb87914eb4c514928f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:52,392 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=8360c6a7edacba13a3c66fce15bde27d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:52,391 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=6d815181d9023f8dc69cd8c24abfe908, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:52,392 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=60e652f34d21c8d215e7757e954e6147, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:10:52,392 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089052392"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089052392"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089052392"}]},"ts":"1690089052392"} 2023-07-23 05:10:52,392 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089052391"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089052391"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089052391"}]},"ts":"1690089052391"} 2023-07-23 05:10:52,392 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089052391"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089052391"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089052391"}]},"ts":"1690089052391"} 2023-07-23 05:10:52,392 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089052391"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089052391"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089052391"}]},"ts":"1690089052391"} 2023-07-23 05:10:52,392 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=fe937f95a34599ca17379f2ef7347d80, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:52,393 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089052392"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089052392"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089052392"}]},"ts":"1690089052392"} 2023-07-23 05:10:52,400 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=69, ppid=66, state=RUNNABLE; CloseRegionProcedure 60e652f34d21c8d215e7757e954e6147, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:52,402 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=70, ppid=65, state=RUNNABLE; CloseRegionProcedure 8360c6a7edacba13a3c66fce15bde27d, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:52,404 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=71, ppid=67, state=RUNNABLE; CloseRegionProcedure c63615dc0209f4fb87914eb4c514928f, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:10:52,405 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=64, state=RUNNABLE; CloseRegionProcedure 6d815181d9023f8dc69cd8c24abfe908, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:52,406 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=68, state=RUNNABLE; CloseRegionProcedure fe937f95a34599ca17379f2ef7347d80, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:52,482 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-23 05:10:52,553 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:52,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 60e652f34d21c8d215e7757e954e6147, disabling compactions & flushes 2023-07-23 05:10:52,554 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:52,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:52,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 
after waiting 0 ms 2023-07-23 05:10:52,554 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:52,556 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:52,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6d815181d9023f8dc69cd8c24abfe908, disabling compactions & flushes 2023-07-23 05:10:52,557 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:52,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:52,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. after waiting 0 ms 2023-07-23 05:10:52,557 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 2023-07-23 05:10:52,561 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:52,562 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147. 2023-07-23 05:10:52,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 60e652f34d21c8d215e7757e954e6147: 2023-07-23 05:10:52,562 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:52,563 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908. 
2023-07-23 05:10:52,563 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6d815181d9023f8dc69cd8c24abfe908: 2023-07-23 05:10:52,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:52,564 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:52,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c63615dc0209f4fb87914eb4c514928f, disabling compactions & flushes 2023-07-23 05:10:52,565 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:52,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:52,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. after waiting 0 ms 2023-07-23 05:10:52,565 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:52,566 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=66 updating hbase:meta row=60e652f34d21c8d215e7757e954e6147, regionState=CLOSED 2023-07-23 05:10:52,566 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089052566"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089052566"}]},"ts":"1690089052566"} 2023-07-23 05:10:52,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:52,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:52,567 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=64 updating hbase:meta row=6d815181d9023f8dc69cd8c24abfe908, regionState=CLOSED 2023-07-23 05:10:52,567 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089052567"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089052567"}]},"ts":"1690089052567"} 2023-07-23 05:10:52,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fe937f95a34599ca17379f2ef7347d80, disabling compactions & flushes 2023-07-23 05:10:52,570 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 
2023-07-23 05:10:52,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:52,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. after waiting 0 ms 2023-07-23 05:10:52,570 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 2023-07-23 05:10:52,572 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=69, resume processing ppid=66 2023-07-23 05:10:52,572 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; CloseRegionProcedure 60e652f34d21c8d215e7757e954e6147, server=jenkins-hbase4.apache.org,41981,1690089047062 in 168 msec 2023-07-23 05:10:52,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=64 2023-07-23 05:10:52,574 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=64, state=SUCCESS; CloseRegionProcedure 6d815181d9023f8dc69cd8c24abfe908, server=jenkins-hbase4.apache.org,37441,1690089043078 in 166 msec 2023-07-23 05:10:52,574 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=60e652f34d21c8d215e7757e954e6147, UNASSIGN in 185 msec 2023-07-23 05:10:52,576 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=6d815181d9023f8dc69cd8c24abfe908, UNASSIGN in 187 msec 2023-07-23 05:10:52,580 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:52,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:52,581 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f. 2023-07-23 05:10:52,581 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c63615dc0209f4fb87914eb4c514928f: 2023-07-23 05:10:52,582 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80. 
2023-07-23 05:10:52,582 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fe937f95a34599ca17379f2ef7347d80: 2023-07-23 05:10:52,583 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:52,584 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=c63615dc0209f4fb87914eb4c514928f, regionState=CLOSED 2023-07-23 05:10:52,584 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089052584"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089052584"}]},"ts":"1690089052584"} 2023-07-23 05:10:52,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:52,585 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:52,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8360c6a7edacba13a3c66fce15bde27d, disabling compactions & flushes 2023-07-23 05:10:52,588 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:52,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 2023-07-23 05:10:52,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. after waiting 0 ms 2023-07-23 05:10:52,588 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 
2023-07-23 05:10:52,593 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=fe937f95a34599ca17379f2ef7347d80, regionState=CLOSED 2023-07-23 05:10:52,593 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690089052593"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089052593"}]},"ts":"1690089052593"} 2023-07-23 05:10:52,601 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=67 2023-07-23 05:10:52,601 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=68 2023-07-23 05:10:52,602 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=67, state=SUCCESS; CloseRegionProcedure c63615dc0209f4fb87914eb4c514928f, server=jenkins-hbase4.apache.org,41981,1690089047062 in 191 msec 2023-07-23 05:10:52,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=68, state=SUCCESS; CloseRegionProcedure fe937f95a34599ca17379f2ef7347d80, server=jenkins-hbase4.apache.org,37441,1690089043078 in 190 msec 2023-07-23 05:10:52,609 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=fe937f95a34599ca17379f2ef7347d80, UNASSIGN in 215 msec 2023-07-23 05:10:52,610 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:52,610 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=c63615dc0209f4fb87914eb4c514928f, UNASSIGN in 215 msec 2023-07-23 05:10:52,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d. 
2023-07-23 05:10:52,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8360c6a7edacba13a3c66fce15bde27d: 2023-07-23 05:10:52,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:52,615 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=65 updating hbase:meta row=8360c6a7edacba13a3c66fce15bde27d, regionState=CLOSED 2023-07-23 05:10:52,615 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690089052615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089052615"}]},"ts":"1690089052615"} 2023-07-23 05:10:52,620 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=70, resume processing ppid=65 2023-07-23 05:10:52,621 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=65, state=SUCCESS; CloseRegionProcedure 8360c6a7edacba13a3c66fce15bde27d, server=jenkins-hbase4.apache.org,37441,1690089043078 in 215 msec 2023-07-23 05:10:52,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=63 2023-07-23 05:10:52,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=63, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=8360c6a7edacba13a3c66fce15bde27d, UNASSIGN in 234 msec 2023-07-23 05:10:52,624 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089052624"}]},"ts":"1690089052624"} 2023-07-23 05:10:52,626 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-23 05:10:52,628 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-23 05:10:52,630 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=63, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 261 msec 2023-07-23 05:10:52,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=63 2023-07-23 05:10:52,684 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 63 completed 2023-07-23 05:10:52,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,703 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=74, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,703 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_595606539' 2023-07-23 05:10:52,704 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=74, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,706 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:52,709 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:52,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-23 05:10:52,720 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:52,720 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:52,720 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:52,720 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:52,720 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:52,725 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/recovered.edits] 2023-07-23 05:10:52,725 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/recovered.edits] 2023-07-23 05:10:52,725 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/recovered.edits] 2023-07-23 05:10:52,725 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/recovered.edits] 2023-07-23 05:10:52,725 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/recovered.edits] 2023-07-23 05:10:52,743 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80/recovered.edits/4.seqid 2023-07-23 05:10:52,744 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908/recovered.edits/4.seqid 2023-07-23 05:10:52,744 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f/recovered.edits/4.seqid 2023-07-23 05:10:52,745 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/fe937f95a34599ca17379f2ef7347d80 2023-07-23 05:10:52,745 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/6d815181d9023f8dc69cd8c24abfe908 2023-07-23 05:10:52,745 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/c63615dc0209f4fb87914eb4c514928f 2023-07-23 05:10:52,746 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d/recovered.edits/4.seqid 2023-07-23 05:10:52,746 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147/recovered.edits/4.seqid 2023-07-23 05:10:52,746 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/8360c6a7edacba13a3c66fce15bde27d 2023-07-23 05:10:52,747 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testTableMoveTruncateAndDrop/60e652f34d21c8d215e7757e954e6147 2023-07-23 05:10:52,747 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-23 05:10:52,751 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=74, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,758 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-23 05:10:52,761 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-23 05:10:52,763 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=74, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,763 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-23 05:10:52,763 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089052763"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:52,763 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089052763"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:52,763 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089052763"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:52,764 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089052763"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:52,764 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089052763"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:52,775 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 05:10:52,775 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 6d815181d9023f8dc69cd8c24abfe908, NAME => 'Group_testTableMoveTruncateAndDrop,,1690089051335.6d815181d9023f8dc69cd8c24abfe908.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 8360c6a7edacba13a3c66fce15bde27d, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690089051335.8360c6a7edacba13a3c66fce15bde27d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 60e652f34d21c8d215e7757e954e6147, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690089051335.60e652f34d21c8d215e7757e954e6147.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => c63615dc0209f4fb87914eb4c514928f, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690089051335.c63615dc0209f4fb87914eb4c514928f.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => fe937f95a34599ca17379f2ef7347d80, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690089051335.fe937f95a34599ca17379f2ef7347d80.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 05:10:52,775 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
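Each Delete entry above is a mutation against the info family of one region row in hbase:meta. Expressed with the public client API, it corresponds roughly to the following; this is an illustration only, not the MetaTableAccessor internals, and the connection handling is assumed:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Delete;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaRowDeleteSketch {
      // Delete the "info" family of one region row in hbase:meta, roughly what each
      // Delete {...,"families":{"info":[...]}} entry above amounts to.
      static void deleteRegionRow(Connection conn, String regionRow) throws Exception {
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          Delete d = new Delete(Bytes.toBytes(regionRow));
          d.addFamily(Bytes.toBytes("info"));  // one column family, as in the log's JSON
          meta.delete(d);
        }
      }
    }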
2023-07-23 05:10:52,775 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089052775"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:52,778 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-23 05:10:52,782 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=74, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-23 05:10:52,785 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 90 msec 2023-07-23 05:10:52,820 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=74 2023-07-23 05:10:52,820 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 74 completed 2023-07-23 05:10:52,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:52,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:52,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:52,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:52,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
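The GetRSGroupInfo/ListRSGroupInfos/MoveTables requests that follow are the test's per-method cleanup, moving everything owned by the test group back to the default group before removing it. A rough equivalent using RSGroupAdminClient, the class that appears in the stack trace further down; the method signatures are assumed from the branch-2 rsgroup module:

    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class GroupCleanupSketch {
      // Move a test group's tables and servers back to "default", then drop the group,
      // mirroring the MoveTables/MoveServers/RemoveRSGroup requests logged here.
      static void dropGroup(Connection conn, String group) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        RSGroupInfo info = groups.getRSGroupInfo(group);
        Set<TableName> tables = info.getTables();
        if (!tables.isEmpty()) {
          groups.moveTables(tables, RSGroupInfo.DEFAULT_GROUP);
        }
        Set<Address> servers = info.getServers();
        if (!servers.isEmpty()) {
          groups.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
        groups.removeRSGroup(group);
      }
    }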
2023-07-23 05:10:52,830 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:52,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup default 2023-07-23 05:10:52,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,837 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:52,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:52,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_595606539, current retry=0 2023-07-23 05:10:52,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_595606539 => default 2023-07-23 05:10:52,847 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:52,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_595606539 2023-07-23 05:10:52,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:52,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:10:52,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:52,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:52,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
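The entries that follow show the cleanup re-creating a group named master and trying to move the master's own address (jenkins-hbase4.apache.org:37433) into it, which RSGroupAdminServer rejects with a ConstraintException because that address is not a live region server. A minimal sketch of that call shape, assuming the same RSGroupAdminClient API as above and taking the host and port from the log:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      static void tryMoveMaster(Connection conn) throws Exception {
        RSGroupAdminClient groups = new RSGroupAdminClient(conn);
        groups.addRSGroup("master");
        try {
          // 37433 is the master's RPC port in this run, not a region server,
          // so moveServers is expected to fail exactly as logged below:
          // "Server ... is either offline or it does not exist."
          groups.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37433)),
              "master");
        } catch (ConstraintException expected) {
          // The test's teardown logs this as "Got this on setup, FYI" and continues.
        }
      }
    }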
2023-07-23 05:10:52,867 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:52,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:52,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:52,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:10:52,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,875 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:10:52,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:52,883 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:10:52,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:52,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,887 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:52,889 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:52,891 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:52,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:52,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:52,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:52,897 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:52,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090252897, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:10:52,898 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:10:52,901 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:52,901 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:52,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:52,902 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:10:52,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:52,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:52,927 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=502 (was 423) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:36893 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1430322893-172.31.14.131-1690089037211:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1347195116_17 at /127.0.0.1:56550 [Receiving block BP-1430322893-172.31.14.131-1690089037211:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698302169-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1698302169-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63392@0x5a8d6a49-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63392@0x5a8d6a49 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698302169-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1430322893-172.31.14.131-1690089037211:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-596213111_17 at /127.0.0.1:34686 [Receiving block BP-1430322893-172.31.14.131-1690089037211:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1388525819_17 at /127.0.0.1:42954 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1430322893-172.31.14.131-1690089037211:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04-prefix:jenkins-hbase4.apache.org,46173,1690089043304.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698302169-636 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41981 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698302169-638 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1430322893-172.31.14.131-1690089037211:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-40da85fa-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1430322893-172.31.14.131-1690089037211:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-596213111_17 at /127.0.0.1:56588 [Receiving block BP-1430322893-172.31.14.131-1690089037211:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-596213111_17 at /127.0.0.1:42970 [Receiving block BP-1430322893-172.31.14.131-1690089037211:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698302169-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1347195116_17 at /127.0.0.1:34670 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1698302169-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:36893 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:63392@0x5a8d6a49-SendThread(127.0.0.1:63392) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1698302169-637-acceptor-0@438ddd76-ServerConnector@5a5d92bc{HTTP/1.1, (http/1.1)}{0.0.0.0:38153} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:41981-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1347195116_17 at /127.0.0.1:34638 [Receiving block BP-1430322893-172.31.14.131-1690089037211:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04-prefix:jenkins-hbase4.apache.org,41981,1690089047062 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1347195116_17 at /127.0.0.1:42956 [Receiving block BP-1430322893-172.31.14.131-1690089037211:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41981 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1347195116_17 at /127.0.0.1:56592 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:41981Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1430322893-172.31.14.131-1690089037211:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=814 (was 684) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=484 (was 475) - SystemLoadAverage LEAK? 
-, ProcessCount=177 (was 177), AvailableMemoryMB=6734 (was 6965) 2023-07-23 05:10:52,929 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-23 05:10:52,948 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=502, OpenFileDescriptor=814, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=177, AvailableMemoryMB=6733 2023-07-23 05:10:52,948 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=502 is superior to 500 2023-07-23 05:10:52,949 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-23 05:10:52,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:52,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:52,956 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:52,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 05:10:52,957 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:52,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:52,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:52,959 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:10:52,963 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,964 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:10:52,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:52,969 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:10:52,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:52,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:52,973 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-23 05:10:52,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:52,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:52,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:52,980 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:52,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:52,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:52,983 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090252982, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:10:52,983 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:10:52,985 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:52,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:52,986 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:52,986 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:10:52,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:52,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:52,988 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-23 05:10:52,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:52,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:39966 deadline: 1690090252988, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 05:10:52,990 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-23 05:10:52,990 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:52,990 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:39966 deadline: 1690090252989, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 05:10:52,991 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-23 05:10:52,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:52,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:39966 deadline: 1690090252991, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-23 05:10:52,993 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-23 05:10:52,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-23 05:10:53,000 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:53,001 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:53,004 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:53,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:53,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:53,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:53,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:53,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:53,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
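The accept/reject pattern above (foo*, foo@ and - rejected with "RSGroup name should only contain alphanumeric characters", foo_123 accepted) suggests the rule the ConstraintException describes: letters, digits and underscore only. Below is a minimal, self-contained sketch of an equivalent client-side pre-check, not the actual RSGroupInfoManagerImpl.checkGroupName code; the character class is an assumption inferred from this log.

    // Hypothetical pre-check mirroring the accept/reject pattern logged above.
    // The regex is inferred from this run, not copied from HBase source.
    public final class GroupNameCheck {
      private static final java.util.regex.Pattern VALID =
          java.util.regex.Pattern.compile("[a-zA-Z0-9_]+");

      static void check(String name) {
        if (name == null || !VALID.matcher(name).matches()) {
          throw new IllegalArgumentException(
              "RSGroup name should only contain alphanumeric characters: " + name);
        }
      }

      public static void main(String[] args) {
        for (String n : new String[] { "foo*", "foo@", "-", "foo_123" }) {
          try {
            check(n);
            System.out.println(n + " -> accepted");
          } catch (IllegalArgumentException e) {
            System.out.println(n + " -> rejected");
          }
        }
      }
    }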
2023-07-23 05:10:53,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:53,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:53,027 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:53,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-23 05:10:53,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:53,034 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:10:53,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:53,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:53,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
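The tear-down sequence being logged here (move no tables and no servers back to the default group, then remove the extra group) can be driven from a client roughly as sketched below. This is a hedged illustration that assumes the hbase-rsgroup RSGroupAdminClient is constructed from a Connection, as the client frames in the stack traces above suggest; the connection setup and the foo_123 group name are taken from this run and are illustrative only.

    // Sketch of the cleanup the test tear-down logs above; assumes the
    // hbase-rsgroup client API (RSGroupAdminClient) available on branch-2.4.
    import java.util.Collections;

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RestoreDefaultGroup {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient groupAdmin = new RSGroupAdminClient(conn);
          // Empty sets are tolerated by the master; it logs
          // "moveTables() passed an empty set. Ignoring." for the table move.
          groupAdmin.moveTables(Collections.emptySet(), "default");
          groupAdmin.moveServers(Collections.emptySet(), "default");
          // Drop the test group once nothing references it any more.
          groupAdmin.removeRSGroup("foo_123");
        }
      }
    }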
2023-07-23 05:10:53,040 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:53,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:53,041 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:53,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:10:53,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:10:53,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:53,068 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:10:53,069 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:53,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:53,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:53,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:53,102 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:53,102 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:53,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:53,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 05:10:53,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090253105, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
2023-07-23 05:10:53,106 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
2023-07-23 05:10:53,108 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 05:10:53,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:10:53,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:10:53,110 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 05:10:53,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 05:10:53,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 05:10:53,136 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=505 (was 502)
Potentially hanging thread: hconnection-0x2db71259-shared-pool-5
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x2db71259-shared-pool-6
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x2db71259-shared-pool-7
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:750)
- Thread LEAK? -, OpenFileDescriptor=814 (was 814), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=484 (was 484), ProcessCount=177 (was 177), AvailableMemoryMB=6676 (was 6733)
2023-07-23 05:10:53,136 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=505 is superior to 500
2023-07-23 05:10:53,172 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=505, OpenFileDescriptor=816, MaxFileDescriptor=60000, SystemLoadAverage=484, ProcessCount=177, AvailableMemoryMB=6673
2023-07-23 05:10:53,173 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=505 is superior to 500
2023-07-23 05:10:53,173 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup
2023-07-23 05:10:53,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:10:53,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:10:53,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 05:10:53,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
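Editor's note: both stack traces above come from the same teardown step, which tries to move the master's own address (port 37433, the port the master RPC handlers in this log are bound to) into the rsgroup named master; RSGroupAdminServer rejects it because that address is not an online region server, and the test base only logs it ("Got this on setup, FYI"). A minimal sketch of tolerating that expected ConstraintException on the client side, assuming an open Connection; the helper name is hypothetical.

import java.util.Collections;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterSketch {
  // Try to pin the master's address into the "master" rsgroup, tolerating the
  // ConstraintException the log shows when the address is not a live region
  // server. The host:port below is the one from this test run.
  static void tryMoveMasterIntoGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    Address master = Address.fromParts("jenkins-hbase4.apache.org", 37433);
    try {
      rsGroupAdmin.moveServers(Collections.singleton(master), "master");
    } catch (ConstraintException e) {
      // Expected here: "Server ...:37433 is either offline or it does not exist."
      // Only live region server addresses are accepted as rsgroup members.
    }
  }
}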
2023-07-23 05:10:53,182 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:53,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:53,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:53,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:10:53,190 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:10:53,192 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:53,197 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:10:53,198 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:53,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:53,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:53,206 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:53,229 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:53,229 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:53,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:53,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-23 05:10:53,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090253237, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
2023-07-23 05:10:53,238 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97)
at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376)
at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364)
at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985)
at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108)
at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136)
at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
2023-07-23 05:10:53,240 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1])
2023-07-23 05:10:53,240 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:10:53,241 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:10:53,241 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}]
2023-07-23 05:10:53,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 05:10:53,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 05:10:53,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:10:53,244 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:10:53,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default
2023-07-23 05:10:53,245 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo
2023-07-23 05:10:53,246 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar
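Editor's note: from here the testFailRemoveGroup setup creates the group bar and moves three region servers into it, which is why the hbase:rsgroup region hosted on one of them is evicted back to a default-group server in the REOPEN/MOVE procedure that follows. A sketch of that setup sequence using the internal client API; the server addresses are the ones in the log, the rest is illustrative.

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class BarGroupSetupSketch {
  static void setUpBarGroup(Connection conn) throws Exception {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    rsGroupAdmin.addRSGroup("bar");

    // The three region servers moved into "bar" in the log. Moving them first
    // evicts regions of tables that still belong to the default group (here
    // hbase:rsgroup, which triggers the REOPEN/MOVE procedure pid=75 below).
    Set<Address> servers = new HashSet<>();
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 45681));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37441));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41981));
    rsGroupAdmin.moveServers(servers, "bar");

    RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
    System.out.println("bar now holds " + bar.getServers());
  }
}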
2023-07-23 05:10:53,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 05:10:53,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:53,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:53,254 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:53,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:53,257 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:53,260 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup bar 2023-07-23 05:10:53,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:53,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 05:10:53,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:53,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:53,266 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(238): Moving server region a6558fab23b07212eec6b6a195311310, which do not belong to RSGroup bar 2023-07-23 05:10:53,267 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, REOPEN/MOVE 2023-07-23 05:10:53,267 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 05:10:53,269 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, REOPEN/MOVE 2023-07-23 05:10:53,270 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=a6558fab23b07212eec6b6a195311310, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:53,270 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089053270"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089053270"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089053270"}]},"ts":"1690089053270"} 2023-07-23 05:10:53,272 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=75, state=RUNNABLE; CloseRegionProcedure a6558fab23b07212eec6b6a195311310, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:53,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a6558fab23b07212eec6b6a195311310, disabling compactions & flushes 2023-07-23 05:10:53,426 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:53,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:53,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. after waiting 0 ms 2023-07-23 05:10:53,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:53,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a6558fab23b07212eec6b6a195311310 1/1 column families, dataSize=6.36 KB heapSize=10.50 KB 2023-07-23 05:10:53,499 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.36 KB at sequenceid=26 (bloomFilter=true), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/.tmp/m/369025ca47484ec1a6646d65ca38ae79 2023-07-23 05:10:53,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 369025ca47484ec1a6646d65ca38ae79 2023-07-23 05:10:53,508 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/.tmp/m/369025ca47484ec1a6646d65ca38ae79 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m/369025ca47484ec1a6646d65ca38ae79 2023-07-23 05:10:53,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 369025ca47484ec1a6646d65ca38ae79 2023-07-23 05:10:53,515 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m/369025ca47484ec1a6646d65ca38ae79, entries=9, sequenceid=26, filesize=5.5 K 2023-07-23 05:10:53,517 
INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.36 KB/6514, heapSize ~10.48 KB/10736, currentSize=0 B/0 for a6558fab23b07212eec6b6a195311310 in 90ms, sequenceid=26, compaction requested=false 2023-07-23 05:10:53,527 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-23 05:10:53,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:10:53,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:53,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a6558fab23b07212eec6b6a195311310: 2023-07-23 05:10:53,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a6558fab23b07212eec6b6a195311310 move to jenkins-hbase4.apache.org,46173,1690089043304 record at close sequenceid=26 2023-07-23 05:10:53,531 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,531 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=a6558fab23b07212eec6b6a195311310, regionState=CLOSED 2023-07-23 05:10:53,531 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089053531"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089053531"}]},"ts":"1690089053531"} 2023-07-23 05:10:53,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=75 2023-07-23 05:10:53,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=75, state=SUCCESS; CloseRegionProcedure a6558fab23b07212eec6b6a195311310, server=jenkins-hbase4.apache.org,45681,1690089042835 in 261 msec 2023-07-23 05:10:53,536 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=75, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:53,687 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=a6558fab23b07212eec6b6a195311310, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:53,687 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089053687"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089053687"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089053687"}]},"ts":"1690089053687"} 2023-07-23 05:10:53,692 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=77, ppid=75, state=RUNNABLE; 
OpenRegionProcedure a6558fab23b07212eec6b6a195311310, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:53,848 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a6558fab23b07212eec6b6a195311310, NAME => 'hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. service=MultiRowMutationService 2023-07-23 05:10:53,848 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,848 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,850 INFO [StoreOpener-a6558fab23b07212eec6b6a195311310-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,851 DEBUG [StoreOpener-a6558fab23b07212eec6b6a195311310-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m 2023-07-23 05:10:53,851 DEBUG [StoreOpener-a6558fab23b07212eec6b6a195311310-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m 2023-07-23 05:10:53,852 INFO [StoreOpener-a6558fab23b07212eec6b6a195311310-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a6558fab23b07212eec6b6a195311310 columnFamilyName m 2023-07-23 05:10:53,859 INFO [StoreFileOpener-m-1] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 369025ca47484ec1a6646d65ca38ae79 2023-07-23 05:10:53,859 DEBUG [StoreOpener-a6558fab23b07212eec6b6a195311310-1] regionserver.HStore(539): loaded hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m/369025ca47484ec1a6646d65ca38ae79 2023-07-23 05:10:53,860 INFO [StoreOpener-a6558fab23b07212eec6b6a195311310-1] regionserver.HStore(310): Store=a6558fab23b07212eec6b6a195311310/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:53,861 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,862 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,865 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a6558fab23b07212eec6b6a195311310 2023-07-23 05:10:53,866 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a6558fab23b07212eec6b6a195311310; next sequenceid=30; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@664ad776, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:53,866 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a6558fab23b07212eec6b6a195311310: 2023-07-23 05:10:53,867 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310., pid=77, masterSystemTime=1690089053844 2023-07-23 05:10:53,868 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:10:53,868 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 
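Editor's note: the pid=75 procedure traced above is a complete region move of hbase:rsgroup (a6558fab23b07212eec6b6a195311310): close and flush on jenkins-hbase4.apache.org,45681, then reopen on jenkins-hbase4.apache.org,46173. The same kind of move can be requested explicitly through the public Admin API; a minimal sketch, with the encoded region name and destination taken from this log.

import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionMoveSketch {
  static void moveRsGroupRegion(Connection conn) throws Exception {
    try (Admin admin = conn.getAdmin()) {
      // Encoded region name and destination are the ones in the log; the
      // master executes the same kind of TransitRegionStateProcedure
      // (close, flush, reopen) that pids 75-77 show.
      byte[] encoded = Bytes.toBytes("a6558fab23b07212eec6b6a195311310");
      ServerName dest =
          ServerName.valueOf("jenkins-hbase4.apache.org", 46173, 1690089043304L);
      admin.move(encoded, dest);
    }
  }
}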
2023-07-23 05:10:53,869 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=75 updating hbase:meta row=a6558fab23b07212eec6b6a195311310, regionState=OPEN, openSeqNum=30, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:53,869 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089053869"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089053869"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089053869"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089053869"}]},"ts":"1690089053869"} 2023-07-23 05:10:53,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=77, resume processing ppid=75 2023-07-23 05:10:53,872 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, ppid=75, state=SUCCESS; OpenRegionProcedure a6558fab23b07212eec6b6a195311310, server=jenkins-hbase4.apache.org,46173,1690089043304 in 182 msec 2023-07-23 05:10:53,874 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=75, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=a6558fab23b07212eec6b6a195311310, REOPEN/MOVE in 606 msec 2023-07-23 05:10:54,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=75 2023-07-23 05:10:54,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062, jenkins-hbase4.apache.org,45681,1690089042835] are moved back to default 2023-07-23 05:10:54,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-23 05:10:54,269 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:54,271 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45681] ipc.CallRunner(144): callId: 11 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:41044 deadline: 1690089114270, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46173 startCode=1690089043304. As of locationSeqNum=26. 2023-07-23 05:10:54,374 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37441] ipc.CallRunner(144): callId: 12 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:44320 deadline: 1690089114374, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46173 startCode=1690089043304. As of locationSeqNum=14. 
2023-07-23 05:10:54,477 DEBUG [hconnection-0x2db71259-shared-pool-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:10:54,479 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60108, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:10:54,491 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:54,492 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:54,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-23 05:10:54,495 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:54,497 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:54,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:54,500 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:10:54,501 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 78 2023-07-23 05:10:54,501 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45681] ipc.CallRunner(144): callId: 183 service: ClientService methodName: ExecService size: 528 connection: 172.31.14.131:41032 deadline: 1690089114501, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=46173 startCode=1690089043304. As of locationSeqNum=26. 
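Editor's note: the RegionMovedException entries are the old hosts redirecting callers that still cache the stale location of the moved region; the HBase client reacts by invalidating its cached location and retrying, so application code normally never sees the exception. A sketch of an ordinary read that would ride through such a redirect; the table name matches the moved region in this log, the row key is only illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class StaleLocationReadSketch {
  static Result readAfterMove(Connection conn) throws Exception {
    try (Table table = conn.getTable(TableName.valueOf("hbase:rsgroup"))) {
      // If the cached location is stale, the old server answers with
      // RegionMovedException ("Region moved to: ... As of locationSeqNum=26");
      // the client updates its region cache and retries transparently.
      return table.get(new Get(Bytes.toBytes("bar")));
    }
  }
}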
2023-07-23 05:10:54,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-23 05:10:54,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-23 05:10:54,608 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:54,609 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 05:10:54,609 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:54,610 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:54,613 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:10:54,615 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:54,616 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 empty. 2023-07-23 05:10:54,617 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:54,617 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-23 05:10:54,645 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:54,650 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => a3320459f0e18e6a2df412cc7c24ce03, NAME => 'Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:54,672 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:54,672 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing a3320459f0e18e6a2df412cc7c24ce03, disabling compactions & flushes 2023-07-23 05:10:54,672 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region 
Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:54,672 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:54,672 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. after waiting 0 ms 2023-07-23 05:10:54,673 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:54,673 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:54,673 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:54,676 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:10:54,677 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089054677"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089054677"}]},"ts":"1690089054677"} 2023-07-23 05:10:54,679 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 05:10:54,680 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:10:54,680 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089054680"}]},"ts":"1690089054680"} 2023-07-23 05:10:54,682 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-23 05:10:54,691 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, ASSIGN}] 2023-07-23 05:10:54,694 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, ASSIGN 2023-07-23 05:10:54,695 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=79, ppid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:54,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-23 05:10:54,846 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:54,847 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089054846"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089054846"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089054846"}]},"ts":"1690089054846"} 2023-07-23 05:10:54,848 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=79, state=RUNNABLE; OpenRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:55,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 
2023-07-23 05:10:55,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a3320459f0e18e6a2df412cc7c24ce03, NAME => 'Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:55,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:55,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,007 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,009 DEBUG [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f 2023-07-23 05:10:55,009 DEBUG [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f 2023-07-23 05:10:55,009 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a3320459f0e18e6a2df412cc7c24ce03 columnFamilyName f 2023-07-23 05:10:55,010 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] regionserver.HStore(310): Store=a3320459f0e18e6a2df412cc7c24ce03/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:55,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,013 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,017 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,020 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:55,021 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a3320459f0e18e6a2df412cc7c24ce03; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10452916480, jitterRate=-0.026496291160583496}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:55,021 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:55,023 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03., pid=80, masterSystemTime=1690089055000 2023-07-23 05:10:55,025 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:55,025 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 
2023-07-23 05:10:55,026 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=79 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:55,026 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089055026"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089055026"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089055026"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089055026"}]},"ts":"1690089055026"} 2023-07-23 05:10:55,031 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=79 2023-07-23 05:10:55,032 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=79, state=SUCCESS; OpenRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304 in 181 msec 2023-07-23 05:10:55,036 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-23 05:10:55,036 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, ASSIGN in 341 msec 2023-07-23 05:10:55,037 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:10:55,037 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089055037"}]},"ts":"1690089055037"} 2023-07-23 05:10:55,039 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-23 05:10:55,042 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=78, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:10:55,044 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 545 msec 2023-07-23 05:10:55,108 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=78 2023-07-23 05:10:55,109 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 78 completed 2023-07-23 05:10:55,109 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-23 05:10:55,109 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:55,116 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
2023-07-23 05:10:55,117 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:55,117 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-23 05:10:55,120 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-23 05:10:55,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:55,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 05:10:55,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:55,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:55,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-23 05:10:55,129 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region a3320459f0e18e6a2df412cc7c24ce03 to RSGroup bar 2023-07-23 05:10:55,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:55,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:55,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:55,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:55,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 05:10:55,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:55,130 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE 2023-07-23 05:10:55,130 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-23 05:10:55,132 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE 2023-07-23 05:10:55,133 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:55,134 DEBUG [PEWorker-3] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089055133"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089055133"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089055133"}]},"ts":"1690089055133"} 2023-07-23 05:10:55,137 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE; CloseRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:55,297 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a3320459f0e18e6a2df412cc7c24ce03, disabling compactions & flushes 2023-07-23 05:10:55,299 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:55,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:55,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. after waiting 0 ms 2023-07-23 05:10:55,299 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:55,303 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:55,303 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 
2023-07-23 05:10:55,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:55,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a3320459f0e18e6a2df412cc7c24ce03 move to jenkins-hbase4.apache.org,45681,1690089042835 record at close sequenceid=2 2023-07-23 05:10:55,305 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,306 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=CLOSED 2023-07-23 05:10:55,306 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089055306"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089055306"}]},"ts":"1690089055306"} 2023-07-23 05:10:55,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-23 05:10:55,309 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; CloseRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304 in 170 msec 2023-07-23 05:10:55,310 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:10:55,460 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 05:10:55,460 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:55,461 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089055460"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089055460"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089055460"}]},"ts":"1690089055460"} 2023-07-23 05:10:55,462 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=81, state=RUNNABLE; OpenRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:55,620 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 
2023-07-23 05:10:55,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a3320459f0e18e6a2df412cc7c24ce03, NAME => 'Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:55,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:55,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,621 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,623 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,624 DEBUG [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f 2023-07-23 05:10:55,624 DEBUG [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f 2023-07-23 05:10:55,625 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a3320459f0e18e6a2df412cc7c24ce03 columnFamilyName f 2023-07-23 05:10:55,626 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] regionserver.HStore(310): Store=a3320459f0e18e6a2df412cc7c24ce03/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:55,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,628 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:55,637 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a3320459f0e18e6a2df412cc7c24ce03; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9529122720, jitterRate=-0.11253128945827484}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:55,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:55,639 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03., pid=83, masterSystemTime=1690089055614 2023-07-23 05:10:55,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:55,641 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:55,641 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=81 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:55,642 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089055641"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089055641"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089055641"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089055641"}]},"ts":"1690089055641"} 2023-07-23 05:10:55,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=81 2023-07-23 05:10:55,647 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=81, state=SUCCESS; OpenRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,45681,1690089042835 in 182 msec 2023-07-23 05:10:55,666 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE in 533 msec 2023-07-23 05:10:56,132 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=81 2023-07-23 05:10:56,132 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
2023-07-23 05:10:56,132 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:56,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:56,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:56,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-23 05:10:56,140 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:56,141 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 05:10:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:56,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:39966 deadline: 1690090256141, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-23 05:10:56,142 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup default 2023-07-23 05:10:56,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:56,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:39966 deadline: 1690090256142, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-23 05:10:56,145 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-23 05:10:56,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:56,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 05:10:56,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:56,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:56,150 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-23 05:10:56,150 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region a3320459f0e18e6a2df412cc7c24ce03 to RSGroup default 2023-07-23 05:10:56,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE 2023-07-23 05:10:56,151 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 05:10:56,153 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE 2023-07-23 05:10:56,154 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:56,154 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089056154"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089056154"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089056154"}]},"ts":"1690089056154"} 2023-07-23 05:10:56,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:56,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,311 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a3320459f0e18e6a2df412cc7c24ce03, disabling compactions & flushes 2023-07-23 05:10:56,312 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:56,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:56,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. after waiting 0 ms 2023-07-23 05:10:56,312 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:56,323 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:10:56,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 
2023-07-23 05:10:56,324 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:56,324 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a3320459f0e18e6a2df412cc7c24ce03 move to jenkins-hbase4.apache.org,46173,1690089043304 record at close sequenceid=5 2023-07-23 05:10:56,328 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,329 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=CLOSED 2023-07-23 05:10:56,329 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089056328"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089056328"}]},"ts":"1690089056328"} 2023-07-23 05:10:56,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-23 05:10:56,338 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,45681,1690089042835 in 176 msec 2023-07-23 05:10:56,339 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:56,490 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:56,490 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089056490"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089056490"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089056490"}]},"ts":"1690089056490"} 2023-07-23 05:10:56,493 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:56,626 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 05:10:56,651 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 
2023-07-23 05:10:56,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a3320459f0e18e6a2df412cc7c24ce03, NAME => 'Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:56,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:56,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,655 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,656 DEBUG [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f 2023-07-23 05:10:56,656 DEBUG [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f 2023-07-23 05:10:56,657 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a3320459f0e18e6a2df412cc7c24ce03 columnFamilyName f 2023-07-23 05:10:56,660 INFO [StoreOpener-a3320459f0e18e6a2df412cc7c24ce03-1] regionserver.HStore(310): Store=a3320459f0e18e6a2df412cc7c24ce03/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:56,661 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,665 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,669 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:56,670 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a3320459f0e18e6a2df412cc7c24ce03; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11598338720, jitterRate=0.08017946779727936}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:56,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:56,672 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03., pid=86, masterSystemTime=1690089056645 2023-07-23 05:10:56,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:56,674 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:56,675 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:56,675 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089056675"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089056675"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089056675"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089056675"}]},"ts":"1690089056675"} 2023-07-23 05:10:56,681 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-23 05:10:56,682 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304 in 184 msec 2023-07-23 05:10:56,696 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, REOPEN/MOVE in 532 msec 2023-07-23 05:10:57,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-23 05:10:57,153 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
2023-07-23 05:10:57,153 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:57,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,157 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,160 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 05:10:57,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:57,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:39966 deadline: 1690090257160, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup beforethe RSGroup can be removed. 
2023-07-23 05:10:57,161 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup default 2023-07-23 05:10:57,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-23 05:10:57,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:57,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-23 05:10:57,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062, jenkins-hbase4.apache.org,45681,1690089042835] are moved back to bar 2023-07-23 05:10:57,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-23 05:10:57,167 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:57,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,170 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-23 05:10:57,176 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,177 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:10:57,180 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:57,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,183 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master 
service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,187 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-23 05:10:57,187 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-23 05:10:57,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-23 05:10:57,191 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089057191"}]},"ts":"1690089057191"} 2023-07-23 05:10:57,192 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-23 05:10:57,194 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-23 05:10:57,195 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, UNASSIGN}] 2023-07-23 05:10:57,197 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=88, ppid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, UNASSIGN 2023-07-23 05:10:57,197 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:57,197 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089057197"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089057197"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089057197"}]},"ts":"1690089057197"} 2023-07-23 05:10:57,201 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=88, state=RUNNABLE; CloseRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:57,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-23 05:10:57,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:57,357 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a3320459f0e18e6a2df412cc7c24ce03, disabling compactions & flushes 2023-07-23 05:10:57,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:57,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:57,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. after waiting 0 ms 2023-07-23 05:10:57,357 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:57,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 05:10:57,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03. 2023-07-23 05:10:57,363 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a3320459f0e18e6a2df412cc7c24ce03: 2023-07-23 05:10:57,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:57,365 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=88 updating hbase:meta row=a3320459f0e18e6a2df412cc7c24ce03, regionState=CLOSED 2023-07-23 05:10:57,365 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690089057365"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089057365"}]},"ts":"1690089057365"} 2023-07-23 05:10:57,368 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=88 2023-07-23 05:10:57,368 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=88, state=SUCCESS; CloseRegionProcedure a3320459f0e18e6a2df412cc7c24ce03, server=jenkins-hbase4.apache.org,46173,1690089043304 in 168 msec 2023-07-23 05:10:57,370 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-23 05:10:57,370 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=a3320459f0e18e6a2df412cc7c24ce03, UNASSIGN in 173 msec 2023-07-23 05:10:57,371 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089057371"}]},"ts":"1690089057371"} 2023-07-23 05:10:57,372 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-23 05:10:57,375 INFO [PEWorker-5] 
procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-23 05:10:57,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 189 msec 2023-07-23 05:10:57,420 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 05:10:57,496 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=87 2023-07-23 05:10:57,497 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 87 completed 2023-07-23 05:10:57,498 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-23 05:10:57,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,503 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=90, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-23 05:10:57,506 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=90, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,508 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:57,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-23 05:10:57,514 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:57,516 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits] 2023-07-23 05:10:57,531 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits/10.seqid to 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03/recovered.edits/10.seqid 2023-07-23 05:10:57,531 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testFailRemoveGroup/a3320459f0e18e6a2df412cc7c24ce03 2023-07-23 05:10:57,532 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-23 05:10:57,535 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=90, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,538 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-23 05:10:57,540 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-23 05:10:57,542 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=90, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,542 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 2023-07-23 05:10:57,542 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089057542"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:57,546 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 05:10:57,546 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => a3320459f0e18e6a2df412cc7c24ce03, NAME => 'Group_testFailRemoveGroup,,1690089054497.a3320459f0e18e6a2df412cc7c24ce03.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 05:10:57,546 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 
2023-07-23 05:10:57,546 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089057546"}]},"ts":"9223372036854775807"} 2023-07-23 05:10:57,549 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-23 05:10:57,551 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=90, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-23 05:10:57,552 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 53 msec 2023-07-23 05:10:57,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-23 05:10:57,613 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-23 05:10:57,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,621 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,622 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:57,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
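The disable/delete sequence recorded above (DisableTableProcedure pid=87 followed by DeleteTableProcedure pid=90) is the standard two-step table drop: the master only deletes a disabled table, and the client blocks until each procedure completes. A small sketch against the public Admin API, using the table name from this test; the configuration and connection handling are assumptions for illustration:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class DropTableSketch {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // DeleteTableProcedure only runs against a disabled table, so disable first;
      // both calls block until the corresponding master procedure finishes.
      if (admin.isTableEnabled(table)) {
        admin.disableTable(table);
      }
      admin.deleteTable(table);
    }
  }
}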
2023-07-23 05:10:57,622 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:57,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:57,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:57,624 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:10:57,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:10:57,641 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:57,647 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:10:57,648 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:57,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,652 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:57,656 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:57,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,662 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,665 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:57,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:57,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090257665, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:10:57,666 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:10:57,668 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:57,670 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,670 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,671 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:10:57,671 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:57,672 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:57,693 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=511 (was 505) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-596213111_17 at /127.0.0.1:42018 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-596213111_17 at /127.0.0.1:42954 [Waiting for operation #10] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1543526756_17 at /127.0.0.1:42004 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data3/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data4/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-11 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a5e2fc3-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
hconnection-0x42e89c26-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=817 (was 816) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=534 (was 484) - SystemLoadAverage LEAK? -, ProcessCount=177 (was 177), AvailableMemoryMB=6516 (was 6673) 2023-07-23 05:10:57,693 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-23 05:10:57,710 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511, OpenFileDescriptor=817, MaxFileDescriptor=60000, SystemLoadAverage=534, ProcessCount=177, AvailableMemoryMB=6515 2023-07-23 05:10:57,710 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-23 05:10:57,711 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-23 05:10:57,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,718 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:10:57,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
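Both the teardown of testFailRemoveGroup and the setup of testMultiTableMove try to move the master's address (jenkins-hbase4.apache.org:37433) into the "master" group and get the "is either offline or it does not exist" ConstraintException logged above, since only live region servers can be moved between groups. A guarded sketch of that call, assuming the same RSGroupAdminClient as in the earlier sketch; the address literal is taken from this run's log and would differ elsewhere:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveMasterAddressSketch {
  public static void main(String[] args) throws Exception {
    Address master = Address.fromString("jenkins-hbase4.apache.org:37433"); // from this run's log
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // moveServers only accepts addresses of live region servers; the master's RPC
      // address is not one, which is what triggers the ConstraintException above.
      Set<Address> live = new HashSet<>();
      for (ServerName sn : admin.getRegionServers()) {
        live.add(sn.getAddress());
      }
      if (live.contains(master)) {
        rsGroupAdmin.moveServers(Collections.singleton(master), "master");
      }
    }
  }
}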
2023-07-23 05:10:57,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:10:57,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:10:57,719 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:57,720 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:10:57,723 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:10:57,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:10:57,728 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:10:57,729 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:10:57,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,732 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,735 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:10:57,737 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:57,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,742 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,744 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:10:57,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:10:57,744 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090257744, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:10:57,745 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:10:57,749 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:57,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,750 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,750 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:10:57,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:57,751 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:57,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:10:57,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:57,753 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_633156038 2023-07-23 05:10:57,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:10:57,757 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,758 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:57,760 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:10:57,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,763 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441] to rsgroup Group_testMultiTableMove_633156038 2023-07-23 05:10:57,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:10:57,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:57,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 05:10:57,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078] are moved back to default 2023-07-23 05:10:57,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_633156038 2023-07-23 05:10:57,772 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:10:57,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:10:57,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:10:57,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_633156038 2023-07-23 05:10:57,778 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:10:57,780 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:57,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:10:57,783 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:10:57,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 91 2023-07-23 05:10:57,784 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 05:10:57,785 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:10:57,785 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:57,786 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:57,786 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:57,796 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:10:57,797 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:57,798 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e empty. 
2023-07-23 05:10:57,798 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:57,798 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-23 05:10:57,825 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:57,826 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 98db655e7dc3903d1623460ffef4d21e, NAME => 'GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:57,846 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:57,846 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 98db655e7dc3903d1623460ffef4d21e, disabling compactions & flushes 2023-07-23 05:10:57,846 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:57,846 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:57,846 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. after waiting 0 ms 2023-07-23 05:10:57,846 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:57,846 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 
2023-07-23 05:10:57,846 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 98db655e7dc3903d1623460ffef4d21e: 2023-07-23 05:10:57,857 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:10:57,859 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089057859"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089057859"}]},"ts":"1690089057859"} 2023-07-23 05:10:57,868 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:10:57,869 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:10:57,869 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089057869"}]},"ts":"1690089057869"} 2023-07-23 05:10:57,870 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-23 05:10:57,875 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:57,875 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:57,875 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:57,875 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:57,875 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:57,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, ASSIGN}] 2023-07-23 05:10:57,877 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, ASSIGN 2023-07-23 05:10:57,878 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=92, ppid=91, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:10:57,885 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 05:10:58,028 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 05:10:58,030 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:58,030 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089058030"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089058030"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089058030"}]},"ts":"1690089058030"} 2023-07-23 05:10:58,032 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=93, ppid=92, state=RUNNABLE; OpenRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:58,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 05:10:58,188 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:58,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98db655e7dc3903d1623460ffef4d21e, NAME => 'GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,189 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,191 INFO [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,192 DEBUG [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/f 2023-07-23 05:10:58,192 DEBUG [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/f 2023-07-23 05:10:58,193 INFO [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98db655e7dc3903d1623460ffef4d21e columnFamilyName f 2023-07-23 05:10:58,194 INFO [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] regionserver.HStore(310): Store=98db655e7dc3903d1623460ffef4d21e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:58,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:58,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:58,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 98db655e7dc3903d1623460ffef4d21e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9826127840, jitterRate=-0.08487053215503693}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:58,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 98db655e7dc3903d1623460ffef4d21e: 2023-07-23 05:10:58,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e., pid=93, masterSystemTime=1690089058183 2023-07-23 05:10:58,208 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:58,208 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=92 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:58,208 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 
2023-07-23 05:10:58,208 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089058208"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089058208"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089058208"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089058208"}]},"ts":"1690089058208"} 2023-07-23 05:10:58,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=93, resume processing ppid=92 2023-07-23 05:10:58,214 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, ppid=92, state=SUCCESS; OpenRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,45681,1690089042835 in 178 msec 2023-07-23 05:10:58,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-23 05:10:58,216 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, ASSIGN in 339 msec 2023-07-23 05:10:58,217 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:10:58,217 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089058217"}]},"ts":"1690089058217"} 2023-07-23 05:10:58,218 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-23 05:10:58,222 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=91, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:10:58,223 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=91, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 442 msec 2023-07-23 05:10:58,387 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=91 2023-07-23 05:10:58,388 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 91 completed 2023-07-23 05:10:58,388 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-23 05:10:58,388 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:58,395 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-23 05:10:58,395 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:58,395 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 
2023-07-23 05:10:58,397 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:10:58,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:10:58,401 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:10:58,403 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 94 2023-07-23 05:10:58,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-23 05:10:58,408 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:10:58,409 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:58,410 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:58,410 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:58,414 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:10:58,416 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,417 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 empty. 
2023-07-23 05:10:58,418 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,418 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-23 05:10:58,445 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-23 05:10:58,447 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 412331af1604a7d157ad44cc6fc79a07, NAME => 'GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:10:58,472 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:58,472 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 412331af1604a7d157ad44cc6fc79a07, disabling compactions & flushes 2023-07-23 05:10:58,472 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:58,472 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:58,472 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. after waiting 0 ms 2023-07-23 05:10:58,472 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:58,472 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 
2023-07-23 05:10:58,472 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 412331af1604a7d157ad44cc6fc79a07: 2023-07-23 05:10:58,475 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:10:58,476 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089058476"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089058476"}]},"ts":"1690089058476"} 2023-07-23 05:10:58,478 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:10:58,479 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:10:58,479 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089058479"}]},"ts":"1690089058479"} 2023-07-23 05:10:58,480 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-23 05:10:58,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:10:58,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:10:58,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:10:58,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:10:58,486 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:10:58,486 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, ASSIGN}] 2023-07-23 05:10:58,488 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, ASSIGN 2023-07-23 05:10:58,489 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:10:58,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-23 05:10:58,639 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 05:10:58,641 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:58,641 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089058641"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089058641"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089058641"}]},"ts":"1690089058641"} 2023-07-23 05:10:58,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:58,708 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-23 05:10:58,798 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:58,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 412331af1604a7d157ad44cc6fc79a07, NAME => 'GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:58,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:58,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,799 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,801 INFO [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,802 DEBUG [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/f 2023-07-23 05:10:58,802 DEBUG [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/f 2023-07-23 05:10:58,803 INFO [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 412331af1604a7d157ad44cc6fc79a07 columnFamilyName f 2023-07-23 05:10:58,804 INFO [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] regionserver.HStore(310): Store=412331af1604a7d157ad44cc6fc79a07/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:58,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,805 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,809 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:58,811 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:10:58,812 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 412331af1604a7d157ad44cc6fc79a07; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12024967200, jitterRate=0.11991234123706818}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:58,812 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 412331af1604a7d157ad44cc6fc79a07: 2023-07-23 05:10:58,813 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07., pid=96, masterSystemTime=1690089058794 2023-07-23 05:10:58,814 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:58,814 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 
2023-07-23 05:10:58,815 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:58,815 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089058815"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089058815"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089058815"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089058815"}]},"ts":"1690089058815"} 2023-07-23 05:10:58,818 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-23 05:10:58,818 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,46173,1690089043304 in 173 msec 2023-07-23 05:10:58,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-23 05:10:58,823 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, ASSIGN in 332 msec 2023-07-23 05:10:58,824 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:10:58,824 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089058824"}]},"ts":"1690089058824"} 2023-07-23 05:10:58,826 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-23 05:10:58,828 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:10:58,830 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 430 msec 2023-07-23 05:10:59,009 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-23 05:10:59,009 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 94 completed 2023-07-23 05:10:59,010 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-23 05:10:59,010 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:59,014 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-23 05:10:59,014 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:59,014 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-23 05:10:59,015 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:10:59,028 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-23 05:10:59,028 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:10:59,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-23 05:10:59,029 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:10:59,029 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_633156038 2023-07-23 05:10:59,033 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_633156038 2023-07-23 05:10:59,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:10:59,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:10:59,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:10:59,037 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:10:59,039 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_633156038 2023-07-23 05:10:59,039 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region 412331af1604a7d157ad44cc6fc79a07 to RSGroup Group_testMultiTableMove_633156038 2023-07-23 05:10:59,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, REOPEN/MOVE 2023-07-23 05:10:59,040 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_633156038 2023-07-23 05:10:59,040 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region 98db655e7dc3903d1623460ffef4d21e to RSGroup Group_testMultiTableMove_633156038 2023-07-23 05:10:59,041 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, REOPEN/MOVE 2023-07-23 05:10:59,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, REOPEN/MOVE 2023-07-23 05:10:59,042 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_633156038, current retry=0 2023-07-23 05:10:59,043 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, REOPEN/MOVE 2023-07-23 05:10:59,043 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:10:59,044 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059042"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089059042"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089059042"}]},"ts":"1690089059042"} 2023-07-23 05:10:59,044 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:10:59,044 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059044"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089059044"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089059044"}]},"ts":"1690089059044"} 2023-07-23 05:10:59,046 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=97, state=RUNNABLE; CloseRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:10:59,047 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=100, ppid=98, state=RUNNABLE; CloseRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:10:59,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 98db655e7dc3903d1623460ffef4d21e, disabling compactions & flushes 2023-07-23 05:10:59,341 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 412331af1604a7d157ad44cc6fc79a07, disabling compactions & flushes 2023-07-23 05:10:59,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:59,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. after waiting 0 ms 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. after waiting 0 ms 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:59,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:59,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:59,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:10:59,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:59,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 412331af1604a7d157ad44cc6fc79a07: 2023-07-23 05:10:59,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 
2023-07-23 05:10:59,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 412331af1604a7d157ad44cc6fc79a07 move to jenkins-hbase4.apache.org,37441,1690089043078 record at close sequenceid=2 2023-07-23 05:10:59,348 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 98db655e7dc3903d1623460ffef4d21e: 2023-07-23 05:10:59,348 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 98db655e7dc3903d1623460ffef4d21e move to jenkins-hbase4.apache.org,37441,1690089043078 record at close sequenceid=2 2023-07-23 05:10:59,350 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,350 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=CLOSED 2023-07-23 05:10:59,351 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059350"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089059350"}]},"ts":"1690089059350"} 2023-07-23 05:10:59,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,352 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=CLOSED 2023-07-23 05:10:59,352 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059352"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089059352"}]},"ts":"1690089059352"} 2023-07-23 05:10:59,354 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=97 2023-07-23 05:10:59,354 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=97, state=SUCCESS; CloseRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,46173,1690089043304 in 307 msec 2023-07-23 05:10:59,355 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 05:10:59,355 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=100, resume processing ppid=98 2023-07-23 05:10:59,355 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=100, ppid=98, state=SUCCESS; CloseRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,45681,1690089042835 in 307 msec 2023-07-23 05:10:59,356 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,37441,1690089043078; forceNewPlan=false, retain=false 2023-07-23 
05:10:59,506 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:59,506 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:59,506 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059506"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089059506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089059506"}]},"ts":"1690089059506"} 2023-07-23 05:10:59,506 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059506"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089059506"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089059506"}]},"ts":"1690089059506"} 2023-07-23 05:10:59,508 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=101, ppid=97, state=RUNNABLE; OpenRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:59,509 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=98, state=RUNNABLE; OpenRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:10:59,665 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 
2023-07-23 05:10:59,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 98db655e7dc3903d1623460ffef4d21e, NAME => 'GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:59,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:59,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,667 INFO [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,668 DEBUG [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/f 2023-07-23 05:10:59,668 DEBUG [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/f 2023-07-23 05:10:59,669 INFO [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 98db655e7dc3903d1623460ffef4d21e columnFamilyName f 2023-07-23 05:10:59,669 INFO [StoreOpener-98db655e7dc3903d1623460ffef4d21e-1] regionserver.HStore(310): Store=98db655e7dc3903d1623460ffef4d21e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:59,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,674 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:10:59,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 98db655e7dc3903d1623460ffef4d21e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9689780160, jitterRate=-0.09756889939308167}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:59,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 98db655e7dc3903d1623460ffef4d21e: 2023-07-23 05:10:59,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e., pid=102, masterSystemTime=1690089059661 2023-07-23 05:10:59,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:59,677 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:10:59,677 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 
2023-07-23 05:10:59,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 412331af1604a7d157ad44cc6fc79a07, NAME => 'GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:10:59,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:10:59,677 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:59,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,678 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,678 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059677"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089059677"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089059677"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089059677"}]},"ts":"1690089059677"} 2023-07-23 05:10:59,679 INFO [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,680 DEBUG [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/f 2023-07-23 05:10:59,680 DEBUG [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/f 2023-07-23 05:10:59,681 INFO [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 412331af1604a7d157ad44cc6fc79a07 columnFamilyName f 2023-07-23 05:10:59,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=98 2023-07-23 05:10:59,681 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=98, state=SUCCESS; OpenRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,37441,1690089043078 in 170 msec 2023-07-23 05:10:59,681 INFO [StoreOpener-412331af1604a7d157ad44cc6fc79a07-1] regionserver.HStore(310): Store=412331af1604a7d157ad44cc6fc79a07/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:10:59,682 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=98, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, REOPEN/MOVE in 640 msec 2023-07-23 05:10:59,682 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,686 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:10:59,687 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 412331af1604a7d157ad44cc6fc79a07; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9699994560, jitterRate=-0.09661760926246643}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:10:59,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 412331af1604a7d157ad44cc6fc79a07: 2023-07-23 05:10:59,687 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07., pid=101, masterSystemTime=1690089059661 2023-07-23 05:10:59,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:10:59,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 
2023-07-23 05:10:59,689 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=97 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:10:59,689 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089059689"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089059689"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089059689"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089059689"}]},"ts":"1690089059689"} 2023-07-23 05:10:59,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=101, resume processing ppid=97 2023-07-23 05:10:59,692 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, ppid=97, state=SUCCESS; OpenRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,37441,1690089043078 in 182 msec 2023-07-23 05:10:59,693 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, REOPEN/MOVE in 653 msec 2023-07-23 05:11:00,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=97 2023-07-23 05:11:00,044 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_633156038. 2023-07-23 05:11:00,044 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:00,049 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:00,050 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:00,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:00,054 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,054 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:00,055 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:00,055 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:00,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_633156038 2023-07-23 05:11:00,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:00,058 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-23 05:11:00,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-23 05:11:00,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=103, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,062 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-23 05:11:00,062 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089060062"}]},"ts":"1690089060062"} 2023-07-23 05:11:00,063 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-23 05:11:00,065 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-23 05:11:00,066 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, UNASSIGN}] 2023-07-23 05:11:00,067 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=104, ppid=103, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, UNASSIGN 2023-07-23 05:11:00,068 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:00,068 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089060068"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089060068"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089060068"}]},"ts":"1690089060068"} 2023-07-23 05:11:00,069 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=104, state=RUNNABLE; CloseRegionProcedure 98db655e7dc3903d1623460ffef4d21e, 
server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:11:00,163 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-23 05:11:00,221 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:11:00,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 98db655e7dc3903d1623460ffef4d21e, disabling compactions & flushes 2023-07-23 05:11:00,223 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:11:00,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:11:00,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. after waiting 0 ms 2023-07-23 05:11:00,223 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 2023-07-23 05:11:00,231 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:11:00,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e. 
2023-07-23 05:11:00,232 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 98db655e7dc3903d1623460ffef4d21e: 2023-07-23 05:11:00,234 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:11:00,235 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=104 updating hbase:meta row=98db655e7dc3903d1623460ffef4d21e, regionState=CLOSED 2023-07-23 05:11:00,235 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089060234"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089060234"}]},"ts":"1690089060234"} 2023-07-23 05:11:00,244 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=104 2023-07-23 05:11:00,245 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=104, state=SUCCESS; CloseRegionProcedure 98db655e7dc3903d1623460ffef4d21e, server=jenkins-hbase4.apache.org,37441,1690089043078 in 173 msec 2023-07-23 05:11:00,247 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=103 2023-07-23 05:11:00,247 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=103, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=98db655e7dc3903d1623460ffef4d21e, UNASSIGN in 179 msec 2023-07-23 05:11:00,248 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089060248"}]},"ts":"1690089060248"} 2023-07-23 05:11:00,250 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-23 05:11:00,253 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-23 05:11:00,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=103, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 194 msec 2023-07-23 05:11:00,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=103 2023-07-23 05:11:00,365 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 103 completed 2023-07-23 05:11:00,365 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-23 05:11:00,366 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,369 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=106, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_633156038' 2023-07-23 05:11:00,370 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=106, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:11:00,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:00,373 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:00,376 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:11:00,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-23 05:11:00,378 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/recovered.edits] 2023-07-23 05:11:00,386 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e/recovered.edits/7.seqid 2023-07-23 05:11:00,387 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveA/98db655e7dc3903d1623460ffef4d21e 2023-07-23 05:11:00,387 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-23 05:11:00,390 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=106, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,393 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-23 05:11:00,396 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-23 05:11:00,398 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=106, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,398 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
2023-07-23 05:11:00,398 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089060398"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:00,405 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 05:11:00,405 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 98db655e7dc3903d1623460ffef4d21e, NAME => 'GrouptestMultiTableMoveA,,1690089057779.98db655e7dc3903d1623460ffef4d21e.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 05:11:00,405 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-23 05:11:00,405 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089060405"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:00,407 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-23 05:11:00,409 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=106, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-23 05:11:00,410 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 44 msec 2023-07-23 05:11:00,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-23 05:11:00,479 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-23 05:11:00,480 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-23 05:11:00,480 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-23 05:11:00,481 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=107, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 05:11:00,485 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089060485"}]},"ts":"1690089060485"} 2023-07-23 05:11:00,486 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-23 05:11:00,488 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-23 05:11:00,492 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, UNASSIGN}] 2023-07-23 05:11:00,493 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=108, ppid=107, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, UNASSIGN 2023-07-23 05:11:00,494 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:00,494 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089060494"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089060494"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089060494"}]},"ts":"1690089060494"} 2023-07-23 05:11:00,499 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=109, ppid=108, state=RUNNABLE; CloseRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,37441,1690089043078}] 2023-07-23 05:11:00,586 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 05:11:00,651 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:11:00,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 412331af1604a7d157ad44cc6fc79a07, disabling compactions & flushes 2023-07-23 05:11:00,652 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:11:00,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:11:00,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. after waiting 0 ms 2023-07-23 05:11:00,652 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 2023-07-23 05:11:00,656 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:11:00,657 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07. 
2023-07-23 05:11:00,657 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 412331af1604a7d157ad44cc6fc79a07: 2023-07-23 05:11:00,658 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:11:00,659 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=108 updating hbase:meta row=412331af1604a7d157ad44cc6fc79a07, regionState=CLOSED 2023-07-23 05:11:00,659 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690089060659"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089060659"}]},"ts":"1690089060659"} 2023-07-23 05:11:00,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=109, resume processing ppid=108 2023-07-23 05:11:00,662 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, ppid=108, state=SUCCESS; CloseRegionProcedure 412331af1604a7d157ad44cc6fc79a07, server=jenkins-hbase4.apache.org,37441,1690089043078 in 161 msec 2023-07-23 05:11:00,664 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-23 05:11:00,664 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=412331af1604a7d157ad44cc6fc79a07, UNASSIGN in 173 msec 2023-07-23 05:11:00,664 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089060664"}]},"ts":"1690089060664"} 2023-07-23 05:11:00,665 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-23 05:11:00,668 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-23 05:11:00,670 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=107, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 188 msec 2023-07-23 05:11:00,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=107 2023-07-23 05:11:00,787 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 107 completed 2023-07-23 05:11:00,788 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-23 05:11:00,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,791 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=110, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_633156038' 2023-07-23 05:11:00,791 DEBUG [PEWorker-1] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=110, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:11:00,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:00,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:00,795 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:11:00,797 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/recovered.edits] 2023-07-23 05:11:00,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-23 05:11:00,803 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/recovered.edits/7.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07/recovered.edits/7.seqid 2023-07-23 05:11:00,803 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/GrouptestMultiTableMoveB/412331af1604a7d157ad44cc6fc79a07 2023-07-23 05:11:00,803 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-23 05:11:00,806 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=110, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,808 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-23 05:11:00,809 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-23 05:11:00,810 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=110, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,810 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-23 05:11:00,810 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089060810"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:00,812 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 05:11:00,812 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 412331af1604a7d157ad44cc6fc79a07, NAME => 'GrouptestMultiTableMoveB,,1690089058397.412331af1604a7d157ad44cc6fc79a07.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 05:11:00,812 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-23 05:11:00,812 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089060812"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:00,813 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-23 05:11:00,815 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=110, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-23 05:11:00,816 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 27 msec 2023-07-23 05:11:00,901 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-23 05:11:00,902 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-23 05:11:00,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:00,905 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:00,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:00,906 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:00,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:00,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441] to rsgroup default 2023-07-23 05:11:00,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_633156038 2023-07-23 05:11:00,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:00,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:00,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_633156038, current retry=0 2023-07-23 05:11:00,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078] are moved back to Group_testMultiTableMove_633156038 2023-07-23 05:11:00,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_633156038 => default 2023-07-23 05:11:00,913 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:00,914 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_633156038 2023-07-23 05:11:00,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:00,919 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:11:00,921 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:00,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:00,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:00,922 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:00,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:00,923 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:00,924 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:00,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:00,930 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:00,934 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:00,934 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:00,936 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:00,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:00,940 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:00,943 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:00,944 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:00,945 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:00,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:00,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090260945, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:00,946 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:00,949 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:00,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:00,949 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:00,950 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:00,950 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:00,951 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:00,969 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=508 (was 511), OpenFileDescriptor=815 (was 817), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=534 (was 534), ProcessCount=177 (was 177), AvailableMemoryMB=6354 (was 6515) 2023-07-23 05:11:00,969 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-23 05:11:00,987 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=508, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=534, ProcessCount=177, AvailableMemoryMB=6354 2023-07-23 05:11:00,987 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-23 05:11:00,987 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-23 05:11:00,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:00,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:00,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:00,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 05:11:00,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:00,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:00,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:00,993 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:00,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:00,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:00,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:01,001 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:01,001 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:01,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:01,006 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:01,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,009 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,011 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:01,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,011 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090261011, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:01,012 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:01,013 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:01,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,014 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,014 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:01,015 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:01,015 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:01,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-23 05:11:01,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,018 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,025 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:01,028 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,028 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,030 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup oldGroup 2023-07-23 05:11:01,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,034 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 05:11:01,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to default 2023-07-23 05:11:01,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-23 05:11:01,035 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,037 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,039 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-23 05:11:01,039 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,040 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-23 05:11:01,040 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,041 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:01,041 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,042 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-23 05:11:01,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,044 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 05:11:01,045 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:01,049 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:01,051 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,051 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681] to rsgroup anotherRSGroup 2023-07-23 05:11:01,055 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 05:11:01,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:01,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 05:11:01,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45681,1690089042835] are moved back to default 2023-07-23 05:11:01,059 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-23 05:11:01,059 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,061 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,061 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,063 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-23 05:11:01,063 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-23 05:11:01,064 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,070 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-23 05:11:01,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:39966 deadline: 1690090261069, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-23 05:11:01,072 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-23 05:11:01,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,072 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:39966 deadline: 1690090261071, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-23 05:11:01,073 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-23 05:11:01,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,073 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:39966 deadline: 1690090261073, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-23 05:11:01,074 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-23 05:11:01,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:39966 deadline: 1690090261074, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-23 05:11:01,078 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,078 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,080 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:01,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:01,080 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:01,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681] to rsgroup default 2023-07-23 05:11:01,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-23 05:11:01,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:01,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-23 05:11:01,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45681,1690089042835] are moved back to anotherRSGroup 2023-07-23 05:11:01,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-23 05:11:01,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,094 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-23 05:11:01,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 05:11:01,104 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:01,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:01,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-23 05:11:01,106 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:01,107 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup default 2023-07-23 05:11:01,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-23 05:11:01,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,112 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-23 05:11:01,112 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to oldGroup 2023-07-23 05:11:01,112 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-23 05:11:01,112 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,113 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-23 05:11:01,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:11:01,119 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:01,121 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:01,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:01,121 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:01,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:01,122 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,123 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:01,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:01,130 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:01,139 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:01,139 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:01,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:01,146 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:01,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,149 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,151 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:01,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090261151, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:01,152 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:01,154 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:01,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,154 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,155 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:01,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:01,155 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,172 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512 (was 508) Potentially hanging thread: hconnection-0x2db71259-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=815 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=534 (was 534), ProcessCount=177 (was 177), AvailableMemoryMB=6356 (was 6354) - AvailableMemoryMB LEAK? - 2023-07-23 05:11:01,173 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-23 05:11:01,189 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=512, OpenFileDescriptor=815, MaxFileDescriptor=60000, SystemLoadAverage=534, ProcessCount=177, AvailableMemoryMB=6356 2023-07-23 05:11:01,189 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-23 05:11:01,189 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-23 05:11:01,193 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,193 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,194 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:01,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
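Context for the setup/teardown RPCs above: between test methods, TestRSGroupsBase restores rsgroup state by moving tables and servers back to "default", dropping and re-adding the helper group "master", and then trying to park the master's address (jenkins-hbase4.apache.org:37433) in that group. Because the HMaster is not a live region server, that last call fails with the ConstraintException captured in the stack traces, which the test logs as a "Got this on setup, FYI" warning and ignores. A minimal sketch of that sequence, assuming an open Connection and using the plain RSGroupAdminClient (the test itself goes through a VerifyingRSGroupAdminClient wrapper, per the stack trace); method and variable names here are illustrative, not the test's literal code:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void restoreDefaultGroups(Connection conn) throws IOException {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      // Empty moves are effectively no-ops; the server logs "passed an empty set. Ignoring."
      groups.moveTables(Collections.<TableName>emptySet(), "default");
      groups.moveServers(Collections.<Address>emptySet(), "default");
      // Drop and re-create the helper group used to park the master's address.
      groups.removeRSGroup("master");
      groups.addRSGroup("master");
      try {
        // The HMaster is not a live region server, so this is expected to fail.
        groups.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 37433)),
            "master");
      } catch (ConstraintException expected) {
        // "Server ... is either offline or it does not exist." -- logged as a warning above.
      }
    }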
2023-07-23 05:11:01,194 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:01,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:01,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:01,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:01,201 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:01,203 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:01,204 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:01,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:01,210 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:01,212 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,212 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,214 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:01,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:01,214 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090261214, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:01,215 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:01,216 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:01,217 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,217 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,217 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:01,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:01,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:01,218 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,219 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-23 05:11:01,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:01,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,227 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:01,229 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,229 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup oldgroup 2023-07-23 05:11:01,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:01,233 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 05:11:01,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to default 2023-07-23 05:11:01,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-23 05:11:01,237 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:01,239 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:01,239 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:01,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-23 05:11:01,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:01,244 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:01,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-23 05:11:01,251 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:01,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 111 2023-07-23 05:11:01,253 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:01,254 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,254 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,255 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,258 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:01,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 05:11:01,260 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,261 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/testRename/be20752d9b84181a034cc1472048379b empty. 
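The entries above correspond to three client-side steps: creating the "oldgroup" rsgroup, moving two region servers (ports 37441 and 41981) into it, and asking the master to create the single-family 'testRename' table (family 'tr', REGION_REPLICATION 1), which shows up as CreateTableProcedure pid=111. A hedged sketch of those calls, assuming an open Connection; the real test drives table creation through HBaseTestingUtility rather than a bare Admin, so treat this as illustrative only:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    static void setUpOldGroupAndTable(Connection conn) throws Exception {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);
      groups.addRSGroup("oldgroup");

      Set<Address> servers = new HashSet<>();
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 37441));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41981));
      groups.moveServers(servers, "oldgroup");   // "Move servers done: default => oldgroup"

      try (Admin admin = conn.getAdmin()) {
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("testRename"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
            .build());                           // submits the CreateTableProcedure seen above
      }
    }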
2023-07-23 05:11:01,262 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,262 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-23 05:11:01,285 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:01,287 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => be20752d9b84181a034cc1472048379b, NAME => 'testRename,,1690089061244.be20752d9b84181a034cc1472048379b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:01,307 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690089061244.be20752d9b84181a034cc1472048379b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:01,307 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing be20752d9b84181a034cc1472048379b, disabling compactions & flushes 2023-07-23 05:11:01,308 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:01,308 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:01,308 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. after waiting 0 ms 2023-07-23 05:11:01,308 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:01,308 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:01,308 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:01,310 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:01,311 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089061311"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089061311"}]},"ts":"1690089061311"} 2023-07-23 05:11:01,316 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
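The "Added 1 regions to meta" entry above is the Put that seeds the new region's row in hbase:meta (info:regioninfo plus info:state). This is not part of the test, but for reference a client can inspect that row directly; a small sketch, assuming an open Connection:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    static void dumpTestRenameMetaRows(Connection conn) throws Exception {
      try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
           ResultScanner scanner = meta.getScanner(
               new Scan().setRowPrefixFilter(Bytes.toBytes("testRename,")))) {
        for (Result r : scanner) {
          // info:state holds the region state name (e.g. OPENING, OPEN) written above.
          byte[] state = r.getValue(Bytes.toBytes("info"), Bytes.toBytes("state"));
          System.out.println(Bytes.toString(r.getRow()) + " state=" + Bytes.toString(state));
        }
      }
    }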
2023-07-23 05:11:01,317 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:01,317 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089061317"}]},"ts":"1690089061317"} 2023-07-23 05:11:01,319 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-23 05:11:01,322 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:01,322 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:01,322 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:01,322 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:01,323 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, ASSIGN}] 2023-07-23 05:11:01,325 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, ASSIGN 2023-07-23 05:11:01,326 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=112, ppid=111, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:11:01,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 05:11:01,476 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 05:11:01,477 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:01,477 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089061477"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089061477"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089061477"}]},"ts":"1690089061477"} 2023-07-23 05:11:01,479 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=113, ppid=112, state=RUNNABLE; OpenRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:01,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 05:11:01,635 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:01,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be20752d9b84181a034cc1472048379b, NAME => 'testRename,,1690089061244.be20752d9b84181a034cc1472048379b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:01,635 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690089061244.be20752d9b84181a034cc1472048379b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:01,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,636 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,637 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,638 DEBUG [StoreOpener-be20752d9b84181a034cc1472048379b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/tr 2023-07-23 05:11:01,639 DEBUG [StoreOpener-be20752d9b84181a034cc1472048379b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/tr 2023-07-23 05:11:01,639 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be20752d9b84181a034cc1472048379b columnFamilyName tr 2023-07-23 05:11:01,640 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] regionserver.HStore(310): Store=be20752d9b84181a034cc1472048379b/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:01,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,641 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:01,646 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:01,647 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be20752d9b84181a034cc1472048379b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10084211680, jitterRate=-0.060834601521492004}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:01,647 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:01,647 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690089061244.be20752d9b84181a034cc1472048379b., pid=113, masterSystemTime=1690089061631 2023-07-23 05:11:01,649 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:01,649 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 
2023-07-23 05:11:01,649 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=112 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:01,649 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089061649"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089061649"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089061649"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089061649"}]},"ts":"1690089061649"} 2023-07-23 05:11:01,653 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=113, resume processing ppid=112 2023-07-23 05:11:01,653 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=113, ppid=112, state=SUCCESS; OpenRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,46173,1690089043304 in 172 msec 2023-07-23 05:11:01,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-23 05:11:01,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, ASSIGN in 330 msec 2023-07-23 05:11:01,655 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:01,655 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089061655"}]},"ts":"1690089061655"} 2023-07-23 05:11:01,656 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-23 05:11:01,660 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=111, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:01,661 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, state=SUCCESS; CreateTableProcedure table=testRename in 416 msec 2023-07-23 05:11:01,827 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 05:11:01,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=111 2023-07-23 05:11:01,863 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 111 completed 2023-07-23 05:11:01,863 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-23 05:11:01,863 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:01,868 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 
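The "Waiting until all regions of table testRename get assigned. Timeout = 60000ms" entry above is the test utility's assignment wait after the create-table future completes. It is roughly equivalent to the following call, with TEST_UTIL being the HBaseTestingUtility instance; a sketch rather than the test's exact code:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    static void waitForTestRenameAssigned(HBaseTestingUtility TEST_UTIL) throws Exception {
      // Blocks until every region of 'testRename' is assigned, or fails after 60s.
      TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60000);
    }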
2023-07-23 05:11:01,868 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:01,868 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 2023-07-23 05:11:01,870 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-23 05:11:01,872 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:01,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:01,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:01,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:01,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-23 05:11:01,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region be20752d9b84181a034cc1472048379b to RSGroup oldgroup 2023-07-23 05:11:01,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:01,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:01,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:01,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:01,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:01,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE 2023-07-23 05:11:01,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-23 05:11:01,877 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE 2023-07-23 05:11:01,878 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:01,878 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089061878"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089061878"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089061878"}]},"ts":"1690089061878"} 2023-07-23 05:11:01,879 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE; CloseRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:02,032 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be20752d9b84181a034cc1472048379b, disabling compactions & flushes 2023-07-23 05:11:02,034 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:02,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:02,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. after waiting 0 ms 2023-07-23 05:11:02,034 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:02,038 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:02,040 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 
2023-07-23 05:11:02,040 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:02,040 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding be20752d9b84181a034cc1472048379b move to jenkins-hbase4.apache.org,41981,1690089047062 record at close sequenceid=2 2023-07-23 05:11:02,042 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,042 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=CLOSED 2023-07-23 05:11:02,042 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089062042"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089062042"}]},"ts":"1690089062042"} 2023-07-23 05:11:02,046 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-23 05:11:02,046 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; CloseRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,46173,1690089043304 in 164 msec 2023-07-23 05:11:02,047 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41981,1690089047062; forceNewPlan=false, retain=false 2023-07-23 05:11:02,197 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 05:11:02,197 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:02,197 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089062197"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089062197"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089062197"}]},"ts":"1690089062197"} 2023-07-23 05:11:02,199 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=114, state=RUNNABLE; OpenRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:11:02,355 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 
2023-07-23 05:11:02,355 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be20752d9b84181a034cc1472048379b, NAME => 'testRename,,1690089061244.be20752d9b84181a034cc1472048379b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:02,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690089061244.be20752d9b84181a034cc1472048379b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:02,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,356 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,359 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,360 DEBUG [StoreOpener-be20752d9b84181a034cc1472048379b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/tr 2023-07-23 05:11:02,360 DEBUG [StoreOpener-be20752d9b84181a034cc1472048379b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/tr 2023-07-23 05:11:02,360 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be20752d9b84181a034cc1472048379b columnFamilyName tr 2023-07-23 05:11:02,361 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] regionserver.HStore(310): Store=be20752d9b84181a034cc1472048379b/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:02,362 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,363 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,367 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:02,368 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be20752d9b84181a034cc1472048379b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9640164640, jitterRate=-0.10218970477581024}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:02,368 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:02,369 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690089061244.be20752d9b84181a034cc1472048379b., pid=116, masterSystemTime=1690089062351 2023-07-23 05:11:02,372 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:02,372 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:02,372 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=114 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:02,373 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089062372"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089062372"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089062372"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089062372"}]},"ts":"1690089062372"} 2023-07-23 05:11:02,375 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=114 2023-07-23 05:11:02,376 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=114, state=SUCCESS; OpenRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,41981,1690089047062 in 175 msec 2023-07-23 05:11:02,377 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE in 499 msec 2023-07-23 05:11:02,877 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=114 2023-07-23 05:11:02,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 
2023-07-23 05:11:02,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:02,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:02,880 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:02,883 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:02,883 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 05:11:02,884 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:02,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-23 05:11:02,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:02,885 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 05:11:02,886 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:02,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:02,887 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:02,888 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-23 05:11:02,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:02,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:02,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:02,892 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 
05:11:02,893 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:02,894 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:02,897 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:02,897 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:02,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681] to rsgroup normal 2023-07-23 05:11:02,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:02,902 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:02,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:02,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:02,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:02,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 05:11:02,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45681,1690089042835] are moved back to default 2023-07-23 05:11:02,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-23 05:11:02,909 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:02,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:02,912 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:02,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-23 05:11:02,915 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 
2023-07-23 05:11:02,917 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:02,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-23 05:11:02,920 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:02,920 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 117 2023-07-23 05:11:02,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-23 05:11:02,922 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:02,922 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:02,923 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:02,923 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:02,923 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:02,926 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:02,927 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:02,928 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 empty. 
2023-07-23 05:11:02,928 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:02,928 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-23 05:11:02,954 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:02,955 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => b29871ac63cf64acce94f886ab9279a6, NAME => 'unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:02,971 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:02,972 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing b29871ac63cf64acce94f886ab9279a6, disabling compactions & flushes 2023-07-23 05:11:02,972 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:02,972 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:02,972 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. after waiting 0 ms 2023-07-23 05:11:02,972 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:02,972 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:02,972 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:02,974 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:02,975 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089062975"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089062975"}]},"ts":"1690089062975"} 2023-07-23 05:11:02,977 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 05:11:02,977 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:02,978 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089062978"}]},"ts":"1690089062978"} 2023-07-23 05:11:02,979 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-23 05:11:02,985 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, ASSIGN}] 2023-07-23 05:11:02,986 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, ASSIGN 2023-07-23 05:11:02,987 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=118, ppid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:11:03,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-23 05:11:03,139 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:03,139 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089063139"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089063139"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089063139"}]},"ts":"1690089063139"} 2023-07-23 05:11:03,141 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=118, state=RUNNABLE; OpenRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:03,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-23 05:11:03,296 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 
2023-07-23 05:11:03,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b29871ac63cf64acce94f886ab9279a6, NAME => 'unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:03,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:03,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,296 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,297 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,299 DEBUG [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/ut 2023-07-23 05:11:03,299 DEBUG [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/ut 2023-07-23 05:11:03,299 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b29871ac63cf64acce94f886ab9279a6 columnFamilyName ut 2023-07-23 05:11:03,300 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] regionserver.HStore(310): Store=b29871ac63cf64acce94f886ab9279a6/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:03,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,301 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,304 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,305 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:03,306 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b29871ac63cf64acce94f886ab9279a6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11563171520, jitterRate=0.07690426707267761}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:03,306 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:03,307 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6., pid=119, masterSystemTime=1690089063292 2023-07-23 05:11:03,308 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:03,308 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 
2023-07-23 05:11:03,308 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=118 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:03,309 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089063308"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089063308"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089063308"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089063308"}]},"ts":"1690089063308"} 2023-07-23 05:11:03,311 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=118 2023-07-23 05:11:03,311 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=118, state=SUCCESS; OpenRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,46173,1690089043304 in 169 msec 2023-07-23 05:11:03,313 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-23 05:11:03,313 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, ASSIGN in 327 msec 2023-07-23 05:11:03,313 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:03,314 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089063313"}]},"ts":"1690089063313"} 2023-07-23 05:11:03,315 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-23 05:11:03,319 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=117, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:03,321 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; CreateTableProcedure table=unmovedTable in 403 msec 2023-07-23 05:11:03,421 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'unmovedTable' 2023-07-23 05:11:03,524 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=117 2023-07-23 05:11:03,524 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 117 completed 2023-07-23 05:11:03,524 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-23 05:11:03,525 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:03,528 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 
2023-07-23 05:11:03,529 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:03,529 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 2023-07-23 05:11:03,531 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-23 05:11:03,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-23 05:11:03,533 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:03,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:03,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:03,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:03,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-23 05:11:03,536 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region b29871ac63cf64acce94f886ab9279a6 to RSGroup normal 2023-07-23 05:11:03,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE 2023-07-23 05:11:03,537 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-23 05:11:03,537 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE 2023-07-23 05:11:03,538 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:03,538 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089063538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089063538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089063538"}]},"ts":"1690089063538"} 2023-07-23 05:11:03,540 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE; CloseRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:03,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 
b29871ac63cf64acce94f886ab9279a6, disabling compactions & flushes 2023-07-23 05:11:03,694 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:03,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:03,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. after waiting 0 ms 2023-07-23 05:11:03,694 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:03,698 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:03,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:03,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:03,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b29871ac63cf64acce94f886ab9279a6 move to jenkins-hbase4.apache.org,45681,1690089042835 record at close sequenceid=2 2023-07-23 05:11:03,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:03,701 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=CLOSED 2023-07-23 05:11:03,701 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089063701"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089063701"}]},"ts":"1690089063701"} 2023-07-23 05:11:03,705 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-23 05:11:03,705 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; CloseRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,46173,1690089043304 in 162 msec 2023-07-23 05:11:03,705 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:11:03,856 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:03,856 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089063856"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089063856"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089063856"}]},"ts":"1690089063856"} 2023-07-23 05:11:03,858 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=120, state=RUNNABLE; OpenRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:04,015 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:04,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b29871ac63cf64acce94f886ab9279a6, NAME => 'unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:04,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:04,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,016 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,018 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,019 DEBUG [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/ut 2023-07-23 05:11:04,020 DEBUG [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/ut 2023-07-23 05:11:04,020 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
b29871ac63cf64acce94f886ab9279a6 columnFamilyName ut 2023-07-23 05:11:04,021 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] regionserver.HStore(310): Store=b29871ac63cf64acce94f886ab9279a6/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:04,022 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,023 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,026 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,027 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b29871ac63cf64acce94f886ab9279a6; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10325524320, jitterRate=-0.038360610604286194}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:04,027 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:04,028 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6., pid=122, masterSystemTime=1690089064011 2023-07-23 05:11:04,030 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:04,030 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 
2023-07-23 05:11:04,030 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=120 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:04,030 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089064030"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089064030"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089064030"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089064030"}]},"ts":"1690089064030"} 2023-07-23 05:11:04,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=120 2023-07-23 05:11:04,037 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=120, state=SUCCESS; OpenRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,45681,1690089042835 in 174 msec 2023-07-23 05:11:04,038 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE in 500 msec 2023-07-23 05:11:04,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=120 2023-07-23 05:11:04,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-23 05:11:04,538 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:04,541 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:04,542 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:04,545 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:04,545 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 05:11:04,546 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:04,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-23 05:11:04,547 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:04,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 05:11:04,548 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:04,549 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-23 05:11:04,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:04,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:04,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:04,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:04,555 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-23 05:11:04,557 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-23 05:11:04,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:04,561 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:04,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-23 05:11:04,577 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:04,578 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-23 05:11:04,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:04,579 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-23 05:11:04,580 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:04,585 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:04,585 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:04,587 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-23 05:11:04,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:04,589 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:04,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:04,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:04,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:04,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-23 05:11:04,598 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region b29871ac63cf64acce94f886ab9279a6 to RSGroup default 2023-07-23 05:11:04,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE 2023-07-23 05:11:04,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 05:11:04,599 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE 2023-07-23 05:11:04,600 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:04,600 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089064600"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089064600"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089064600"}]},"ts":"1690089064600"} 2023-07-23 05:11:04,601 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:04,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b29871ac63cf64acce94f886ab9279a6, disabling compactions & flushes 2023-07-23 05:11:04,756 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:04,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:04,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. after waiting 0 ms 2023-07-23 05:11:04,756 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:04,760 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:11:04,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:04,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:04,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b29871ac63cf64acce94f886ab9279a6 move to jenkins-hbase4.apache.org,46173,1690089043304 record at close sequenceid=5 2023-07-23 05:11:04,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:04,763 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=CLOSED 2023-07-23 05:11:04,763 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089064763"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089064763"}]},"ts":"1690089064763"} 2023-07-23 05:11:04,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-23 05:11:04,766 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,45681,1690089042835 in 163 msec 2023-07-23 05:11:04,767 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:11:04,917 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:04,917 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089064917"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089064917"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089064917"}]},"ts":"1690089064917"} 2023-07-23 05:11:04,919 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:05,075 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:05,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b29871ac63cf64acce94f886ab9279a6, NAME => 'unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:05,075 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:05,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,076 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,077 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,079 DEBUG [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/ut 2023-07-23 05:11:05,079 DEBUG [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/ut 2023-07-23 05:11:05,080 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b29871ac63cf64acce94f886ab9279a6 columnFamilyName ut 2023-07-23 05:11:05,080 INFO [StoreOpener-b29871ac63cf64acce94f886ab9279a6-1] regionserver.HStore(310): Store=b29871ac63cf64acce94f886ab9279a6/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:05,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,083 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:05,087 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b29871ac63cf64acce94f886ab9279a6; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12064465920, jitterRate=0.12359094619750977}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:05,087 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:05,088 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6., pid=125, masterSystemTime=1690089065071 2023-07-23 05:11:05,090 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:05,090 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 
2023-07-23 05:11:05,090 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=b29871ac63cf64acce94f886ab9279a6, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:05,090 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690089065090"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089065090"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089065090"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089065090"}]},"ts":"1690089065090"} 2023-07-23 05:11:05,094 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-23 05:11:05,094 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure b29871ac63cf64acce94f886ab9279a6, server=jenkins-hbase4.apache.org,46173,1690089043304 in 172 msec 2023-07-23 05:11:05,096 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=b29871ac63cf64acce94f886ab9279a6, REOPEN/MOVE in 496 msec 2023-07-23 05:11:05,343 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-23 05:11:05,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-23 05:11:05,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-23 05:11:05,599 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:05,600 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45681] to rsgroup default 2023-07-23 05:11:05,603 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-23 05:11:05,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:05,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:05,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:05,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:05,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-23 05:11:05,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,45681,1690089042835] are moved back to normal 2023-07-23 05:11:05,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-23 05:11:05,607 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:05,608 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-23 05:11:05,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:05,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:05,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:05,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 05:11:05,614 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:05,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:05,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:05,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:05,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:05,615 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:05,616 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:05,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:05,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:05,622 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:11:05,623 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:05,625 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-23 05:11:05,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:05,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:05,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:05,629 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-23 05:11:05,630 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(345): Moving region be20752d9b84181a034cc1472048379b to RSGroup default 2023-07-23 05:11:05,630 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE 2023-07-23 05:11:05,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-23 05:11:05,631 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE 2023-07-23 05:11:05,631 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=be20752d9b84181a034cc1472048379b, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:05,632 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089065631"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089065631"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089065631"}]},"ts":"1690089065631"} 2023-07-23 05:11:05,633 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,41981,1690089047062}] 2023-07-23 05:11:05,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close be20752d9b84181a034cc1472048379b 2023-07-23 05:11:05,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be20752d9b84181a034cc1472048379b, disabling compactions & flushes 2023-07-23 05:11:05,787 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:05,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:05,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. after waiting 0 ms 2023-07-23 05:11:05,787 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:05,792 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-23 05:11:05,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 
2023-07-23 05:11:05,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:05,793 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding be20752d9b84181a034cc1472048379b move to jenkins-hbase4.apache.org,45681,1690089042835 record at close sequenceid=5 2023-07-23 05:11:05,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed be20752d9b84181a034cc1472048379b 2023-07-23 05:11:05,795 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=CLOSED 2023-07-23 05:11:05,795 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089065795"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089065795"}]},"ts":"1690089065795"} 2023-07-23 05:11:05,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-23 05:11:05,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,41981,1690089047062 in 166 msec 2023-07-23 05:11:05,801 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:11:05,951 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 05:11:05,951 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:05,952 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089065951"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089065951"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089065951"}]},"ts":"1690089065951"} 2023-07-23 05:11:05,953 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:06,108 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 
2023-07-23 05:11:06,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => be20752d9b84181a034cc1472048379b, NAME => 'testRename,,1690089061244.be20752d9b84181a034cc1472048379b.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:06,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690089061244.be20752d9b84181a034cc1472048379b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:06,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,109 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,110 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,111 DEBUG [StoreOpener-be20752d9b84181a034cc1472048379b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/tr 2023-07-23 05:11:06,111 DEBUG [StoreOpener-be20752d9b84181a034cc1472048379b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/tr 2023-07-23 05:11:06,112 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region be20752d9b84181a034cc1472048379b columnFamilyName tr 2023-07-23 05:11:06,112 INFO [StoreOpener-be20752d9b84181a034cc1472048379b-1] regionserver.HStore(310): Store=be20752d9b84181a034cc1472048379b/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:06,113 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,114 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:06,117 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened be20752d9b84181a034cc1472048379b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11235027840, jitterRate=0.04634350538253784}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:06,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:06,118 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690089061244.be20752d9b84181a034cc1472048379b., pid=128, masterSystemTime=1690089066105 2023-07-23 05:11:06,119 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:06,119 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:06,119 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=be20752d9b84181a034cc1472048379b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:06,120 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690089061244.be20752d9b84181a034cc1472048379b.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690089066119"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089066119"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089066119"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089066119"}]},"ts":"1690089066119"} 2023-07-23 05:11:06,122 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-23 05:11:06,122 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure be20752d9b84181a034cc1472048379b, server=jenkins-hbase4.apache.org,45681,1690089042835 in 168 msec 2023-07-23 05:11:06,123 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=be20752d9b84181a034cc1472048379b, REOPEN/MOVE in 492 msec 2023-07-23 05:11:06,631 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-23 05:11:06,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-23 05:11:06,631 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:06,633 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup default 2023-07-23 05:11:06,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-23 05:11:06,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:06,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-23 05:11:06,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to newgroup 2023-07-23 05:11:06,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-23 05:11:06,639 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:06,640 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-23 05:11:06,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:06,645 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:06,648 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:06,648 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:06,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:06,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:06,658 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:06,661 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,661 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,663 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:06,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,663 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090266663, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:06,663 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:06,665 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,665 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,666 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:06,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:06,666 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,685 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=506 (was 512), OpenFileDescriptor=775 (was 815), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=508 (was 534), ProcessCount=179 (was 177) - ProcessCount LEAK? -, AvailableMemoryMB=6355 (was 6356) 2023-07-23 05:11:06,686 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-23 05:11:06,702 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=506, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=508, ProcessCount=179, AvailableMemoryMB=6352 2023-07-23 05:11:06,702 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=506 is superior to 500 2023-07-23 05:11:06,702 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-23 05:11:06,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,706 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:06,707 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:06,707 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:06,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:06,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:06,708 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:06,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:06,714 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:06,716 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:06,717 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:06,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:06,722 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:06,723 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:06,725 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,726 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,727 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:06,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090266727, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:06,728 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:06,730 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:06,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,730 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,731 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:06,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:06,731 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-23 05:11:06,732 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:06,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-23 05:11:06,752 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-23 05:11:06,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-23 05:11:06,757 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,758 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-23 05:11:06,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,758 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:39966 deadline: 1690090266758, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-23 05:11:06,762 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-23 05:11:06,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,762 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 803 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:39966 deadline: 1690090266762, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-23 05:11:06,765 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-23 05:11:06,766 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-23 05:11:06,771 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-23 05:11:06,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:39966 deadline: 1690090266770, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-23 05:11:06,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,775 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:06,776 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
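The sequence above is the tail of testBogusArgs: lookups that name a nonexistent group or server simply come back empty, while every mutating call against the bogus group (remove rsgroup, move servers, balance) is rejected by the master with a ConstraintException before any state changes. A rough client-side equivalent, sketched against the coprocessor-based RSGroupAdminClient used by this test module (class name, constructor, and method signatures are assumptions based on branch-2.4 and are not verified against this exact build), would look like:

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class BogusArgsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // Assumed client API; in the test it is wrapped by VerifyingRSGroupAdminClient.
          RSGroupAdminClient admin = new RSGroupAdminClient(conn);

          // Lookups for unknown names just return nothing (no exception logged above).
          System.out.println(admin.getRSGroupInfo("bogus"));
          System.out.println(admin.getRSGroupOfServer(Address.fromParts("bogus", 123)));

          // Mutating calls against a nonexistent group are rejected server-side with
          // ConstraintException, matching the DEBUG entries from MetricsHBaseServer above.
          try {
            admin.removeRSGroup("bogus");
          } catch (ConstraintException e) {
            System.out.println("expected: " + e.getMessage());
          }
          try {
            admin.moveServers(Collections.singleton(Address.fromParts("bogus", 123)), "bogus");
          } catch (ConstraintException e) {
            System.out.println("expected: " + e.getMessage());
          }
          try {
            admin.balanceRSGroup("bogus");
          } catch (ConstraintException e) {
            System.out.println("expected: " + e.getMessage());
          }
        }
      }
    }

The server-thrown ConstraintException travels back as a RemoteWithExtrasException and is re-instantiated on the client, which is why the test's teardown (and the sketch above) can catch it by its original type.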
2023-07-23 05:11:06,776 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 05:11:06,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 05:11:06,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 05:11:06,777 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 05:11:06,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 05:11:06,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 05:11:06,783 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 05:11:06,785 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 05:11:06,785 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 05:11:06,787 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 05:11:06,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 05:11:06,789 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 05:11:06,791 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 05:11:06,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:11:06,793 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:11:06,795 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master
2023-07-23 05:11:06,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,798 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090266795, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:06,799 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:06,800 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:06,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,801 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,801 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:06,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:06,802 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,821 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=510 (was 506) Potentially hanging thread: hconnection-0x2db71259-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x42e89c26-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x42e89c26-shared-pool-25
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:750)
Potentially hanging thread: hconnection-0x2db71259-shared-pool-24
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
 java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
 java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:750)
 - Thread LEAK? -, OpenFileDescriptor=775 (was 775), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=508 (was 508), ProcessCount=179 (was 179), AvailableMemoryMB=6357 (was 6352) - AvailableMemoryMB LEAK? -
2023-07-23 05:11:06,822 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=510 is superior to 500
2023-07-23 05:11:06,843 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=510, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=508, ProcessCount=179, AvailableMemoryMB=6357
2023-07-23 05:11:06,843 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=510 is superior to 500
2023-07-23 05:11:06,843 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testDisabledTableMove
2023-07-23 05:11:06,848 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:11:06,848 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:11:06,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-23 05:11:06,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
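The "Potentially hanging thread" entries the ResourceChecker keeps flagging all share the same stack: a pool worker parked in LinkedBlockingQueue.poll underneath ThreadPoolExecutor.getTask. Those are idle hconnection shared-pool workers waiting out their keep-alive rather than deadlocked threads, which is why the thread count (510) hovers just above the checker's 500 threshold between tests. A JDK-only sketch (the class and names here are illustrative, not HBase code) that reproduces the same parked stack:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class IdlePoolThreadsSketch {
      public static void main(String[] args) throws Exception {
        // A pool shaped like the hconnection-*-shared-pool executors: workers stay
        // alive for a keep-alive period after their last task finishes.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // idle workers block in a timed poll
        for (int i = 0; i < 4; i++) {
          pool.execute(() -> { /* short task */ });
        }
        Thread.sleep(500); // let the tasks finish

        // Idle workers are now parked in LinkedBlockingQueue.poll via
        // ThreadPoolExecutor.getTask -- the frames the ResourceChecker prints above.
        Thread.getAllStackTraces().forEach((t, frames) -> {
          if (t.getName().startsWith("pool-") && frames.length > 0) {
            System.out.println(t.getName() + " -> " + frames[0]);
          }
        });
        pool.shutdown();
      }
    }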
2023-07-23 05:11:06,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-23 05:11:06,850 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-23 05:11:06,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-23 05:11:06,851 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-23 05:11:06,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 05:11:06,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-23 05:11:06,858 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-23 05:11:06,861 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-23 05:11:06,862 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-23 05:11:06,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-23 05:11:06,864 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-23 05:11:06,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-23 05:11:06,868 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-23 05:11:06,871 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-23 05:11:06,871 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-23 05:11:06,872 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master
2023-07-23 05:11:06,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:06,873 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090266872, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:06,873 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:06,875 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:06,875 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,876 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:06,876 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:06,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:06,877 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,878 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_123733219 2023-07-23 05:11:06,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,883 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_123733219 2023-07-23 
05:11:06,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:06,888 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:06,890 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:06,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,893 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,896 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup Group_testDisabledTableMove_123733219 2023-07-23 05:11:06,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_123733219 2023-07-23 05:11:06,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:06,899 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:06,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-23 05:11:06,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to default 2023-07-23 05:11:06,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_123733219 2023-07-23 05:11:06,900 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:06,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:06,903 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:06,906 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_123733219 2023-07-23 05:11:06,906 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:06,908 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:06,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:06,911 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:06,911 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 129 2023-07-23 05:11:06,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-23 05:11:06,913 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:06,913 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_123733219 2023-07-23 05:11:06,914 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:06,914 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:06,916 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:06,920 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 2023-07-23 05:11:06,920 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:06,920 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:06,920 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:06,920 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:06,921 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 empty. 2023-07-23 05:11:06,921 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 empty. 2023-07-23 05:11:06,921 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 empty. 2023-07-23 05:11:06,921 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 empty. 2023-07-23 05:11:06,922 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 empty. 2023-07-23 05:11:06,922 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 2023-07-23 05:11:06,922 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:06,922 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:06,922 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:06,923 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:06,923 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-23 05:11:06,947 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:06,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 48aab242cc2ac5d1dbb2630462bf0304, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME 
=> 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:06,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 013589b91f8cbbdbca6891e824725349, NAME => 'Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:06,951 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => df50ed707009fa65a048d64b94129310, NAME => 'Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:07,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-23 05:11:07,032 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,032 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 48aab242cc2ac5d1dbb2630462bf0304, disabling compactions & flushes 2023-07-23 05:11:07,032 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,032 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 
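[Annotation] The RSGroupAdminService requests recorded above (AddRSGroup, ListRSGroupInfos, MoveServers, GetRSGroupInfo) are the server-side trace of ordinary rsgroup admin calls issued by the test client. A minimal, hedged sketch of what such a client sequence could look like, assuming the branch-2 RSGroupAdminClient API and using the group/server names from the log purely as placeholders, is:

import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      // Hypothetical group name; the test generates a random numeric suffix (e.g. _123733219).
      String group = "Group_testDisabledTableMove_123733219";
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      rsGroupAdmin.addRSGroup(group);                        // -> RSGroupAdminService.AddRSGroup
      rsGroupAdmin.moveServers(                              // -> RSGroupAdminService.MoveServers
          new HashSet<>(Arrays.asList(
              Address.fromParts("jenkins-hbase4.apache.org", 37441),
              Address.fromParts("jenkins-hbase4.apache.org", 41981))),
          group);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group); // -> RSGroupAdminService.GetRSGroupInfo
      System.out.println(info.getServers());
    }
  }
}

Each call above maps onto one "master service request for RSGroupAdminService.*" line in the log; the znode updates under /hbase/rsgroup/ are the master persisting the resulting group membership.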
2023-07-23 05:11:07,032 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing df50ed707009fa65a048d64b94129310, disabling compactions & flushes 2023-07-23 05:11:07,032 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,032 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. after waiting 0 ms 2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. after waiting 0 ms 2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,033 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,033 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 
2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for df50ed707009fa65a048d64b94129310: 2023-07-23 05:11:07,033 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 48aab242cc2ac5d1dbb2630462bf0304: 2023-07-23 05:11:07,034 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 086f86ff85ef955a0f08b856915ee8c4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:07,035 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => bb404ff7818a3f426e242c4b796057e0, NAME => 'Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp 2023-07-23 05:11:07,037 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,037 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 013589b91f8cbbdbca6891e824725349, disabling compactions & flushes 2023-07-23 05:11:07,037 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,038 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,038 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. after waiting 0 ms 2023-07-23 05:11:07,038 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,038 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 
2023-07-23 05:11:07,038 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 013589b91f8cbbdbca6891e824725349: 2023-07-23 05:11:07,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing bb404ff7818a3f426e242c4b796057e0, disabling compactions & flushes 2023-07-23 05:11:07,068 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. after waiting 0 ms 2023-07-23 05:11:07,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,068 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for bb404ff7818a3f426e242c4b796057e0: 2023-07-23 05:11:07,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 086f86ff85ef955a0f08b856915ee8c4, disabling compactions & flushes 2023-07-23 05:11:07,072 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. after waiting 0 ms 2023-07-23 05:11:07,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 
2023-07-23 05:11:07,072 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,072 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 086f86ff85ef955a0f08b856915ee8c4: 2023-07-23 05:11:07,075 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:07,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067076"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067076"}]},"ts":"1690089067076"} 2023-07-23 05:11:07,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067076"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067076"}]},"ts":"1690089067076"} 2023-07-23 05:11:07,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067076"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067076"}]},"ts":"1690089067076"} 2023-07-23 05:11:07,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067076"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067076"}]},"ts":"1690089067076"} 2023-07-23 05:11:07,076 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067076"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067076"}]},"ts":"1690089067076"} 2023-07-23 05:11:07,079 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
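[Annotation] The CreateTableProcedure (pid=129) above, with family 'f' and five regions added to meta, is driven by a plain client-side create request. A rough sketch of the kind of call that would produce it, assuming the standard HBase 2.x Admin/TableDescriptorBuilder API (the two middle split keys are readable placeholders for the binary split points i\xBF\x14i\xBE and r\x1C\xC7r\x1B seen in the region names), is:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Descriptor matching the attributes logged for pid=129: one family 'f', defaults otherwise.
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();
      // Four split keys yield the five regions added to meta above.
      byte[][] splits = new byte[][] {
          Bytes.toBytes("aaaaa"),
          Bytes.toBytes("jjjjj"),   // placeholder for i\xBF\x14i\xBE
          Bytes.toBytes("rrrrr"),   // placeholder for r\x1C\xC7r\x1B
          Bytes.toBytes("zzzzz")
      };
      admin.createTable(desc, splits); // returns once the CreateTableProcedure reports done
    }
  }
}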
2023-07-23 05:11:07,079 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:07,080 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089067080"}]},"ts":"1690089067080"} 2023-07-23 05:11:07,081 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-23 05:11:07,084 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:07,085 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:07,085 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:07,085 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:07,085 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, ASSIGN}, {pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, ASSIGN}, {pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, ASSIGN}, {pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, ASSIGN}, {pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, ASSIGN}] 2023-07-23 05:11:07,087 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, ASSIGN 2023-07-23 05:11:07,087 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, ASSIGN 2023-07-23 05:11:07,087 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, ASSIGN 2023-07-23 05:11:07,088 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, ASSIGN 2023-07-23 05:11:07,088 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, ASSIGN 2023-07-23 05:11:07,088 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:11:07,088 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=131, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:11:07,088 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=130, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45681,1690089042835; forceNewPlan=false, retain=false 2023-07-23 05:11:07,089 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=132, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:11:07,089 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46173,1690089043304; forceNewPlan=false, retain=false 2023-07-23 05:11:07,138 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-23 05:11:07,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-23 05:11:07,239 INFO [jenkins-hbase4:37433] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-23 05:11:07,242 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=bb404ff7818a3f426e242c4b796057e0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:07,242 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=df50ed707009fa65a048d64b94129310, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,242 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=086f86ff85ef955a0f08b856915ee8c4, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,243 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067242"}]},"ts":"1690089067242"} 2023-07-23 05:11:07,242 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=48aab242cc2ac5d1dbb2630462bf0304, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:07,242 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=013589b91f8cbbdbca6891e824725349, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,243 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067242"}]},"ts":"1690089067242"} 2023-07-23 05:11:07,243 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067242"}]},"ts":"1690089067242"} 2023-07-23 05:11:07,243 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067242"}]},"ts":"1690089067242"} 2023-07-23 05:11:07,243 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067242"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067242"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067242"}]},"ts":"1690089067242"} 2023-07-23 05:11:07,245 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=135, ppid=130, state=RUNNABLE; OpenRegionProcedure df50ed707009fa65a048d64b94129310, 
server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:07,246 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=136, ppid=132, state=RUNNABLE; OpenRegionProcedure 48aab242cc2ac5d1dbb2630462bf0304, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:07,251 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=137, ppid=131, state=RUNNABLE; OpenRegionProcedure 013589b91f8cbbdbca6891e824725349, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:07,252 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=133, state=RUNNABLE; OpenRegionProcedure 086f86ff85ef955a0f08b856915ee8c4, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:07,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure bb404ff7818a3f426e242c4b796057e0, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:07,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,404 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,404 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 48aab242cc2ac5d1dbb2630462bf0304, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => df50ed707009fa65a048d64b94129310, NAME => 'Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,405 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,405 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,407 INFO [StoreOpener-df50ed707009fa65a048d64b94129310-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,407 INFO [StoreOpener-48aab242cc2ac5d1dbb2630462bf0304-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,408 DEBUG [StoreOpener-df50ed707009fa65a048d64b94129310-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/f 2023-07-23 05:11:07,408 DEBUG [StoreOpener-df50ed707009fa65a048d64b94129310-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/f 2023-07-23 05:11:07,409 DEBUG [StoreOpener-48aab242cc2ac5d1dbb2630462bf0304-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/f 2023-07-23 05:11:07,409 DEBUG [StoreOpener-48aab242cc2ac5d1dbb2630462bf0304-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/f 2023-07-23 05:11:07,409 INFO [StoreOpener-df50ed707009fa65a048d64b94129310-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region df50ed707009fa65a048d64b94129310 columnFamilyName f 2023-07-23 05:11:07,409 INFO [StoreOpener-48aab242cc2ac5d1dbb2630462bf0304-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 48aab242cc2ac5d1dbb2630462bf0304 columnFamilyName f 2023-07-23 05:11:07,409 INFO [StoreOpener-df50ed707009fa65a048d64b94129310-1] regionserver.HStore(310): Store=df50ed707009fa65a048d64b94129310/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:07,410 INFO [StoreOpener-48aab242cc2ac5d1dbb2630462bf0304-1] regionserver.HStore(310): Store=48aab242cc2ac5d1dbb2630462bf0304/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:07,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,410 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,411 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:07,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:07,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 48aab242cc2ac5d1dbb2630462bf0304; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9639870720, jitterRate=-0.10221707820892334}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:07,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 
48aab242cc2ac5d1dbb2630462bf0304: 2023-07-23 05:11:07,418 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened df50ed707009fa65a048d64b94129310; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10825072320, jitterRate=0.008163422346115112}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:07,418 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for df50ed707009fa65a048d64b94129310: 2023-07-23 05:11:07,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310., pid=135, masterSystemTime=1690089067400 2023-07-23 05:11:07,419 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304., pid=136, masterSystemTime=1690089067400 2023-07-23 05:11:07,420 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,420 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 013589b91f8cbbdbca6891e824725349, NAME => 'Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-23 05:11:07,421 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=130 updating hbase:meta row=df50ed707009fa65a048d64b94129310, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,421 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067421"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089067421"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089067421"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089067421"}]},"ts":"1690089067421"} 2023-07-23 05:11:07,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for 
Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,421 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,421 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,421 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=132 updating hbase:meta row=48aab242cc2ac5d1dbb2630462bf0304, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:07,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => bb404ff7818a3f426e242c4b796057e0, NAME => 'Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-23 05:11:07,422 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067421"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089067421"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089067421"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089067421"}]},"ts":"1690089067421"} 2023-07-23 05:11:07,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,423 INFO [StoreOpener-013589b91f8cbbdbca6891e824725349-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,424 INFO [StoreOpener-bb404ff7818a3f426e242c4b796057e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,426 DEBUG 
[StoreOpener-013589b91f8cbbdbca6891e824725349-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/f 2023-07-23 05:11:07,426 DEBUG [StoreOpener-013589b91f8cbbdbca6891e824725349-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/f 2023-07-23 05:11:07,426 DEBUG [StoreOpener-bb404ff7818a3f426e242c4b796057e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/f 2023-07-23 05:11:07,426 INFO [StoreOpener-013589b91f8cbbdbca6891e824725349-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 013589b91f8cbbdbca6891e824725349 columnFamilyName f 2023-07-23 05:11:07,426 DEBUG [StoreOpener-bb404ff7818a3f426e242c4b796057e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/f 2023-07-23 05:11:07,427 INFO [StoreOpener-bb404ff7818a3f426e242c4b796057e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region bb404ff7818a3f426e242c4b796057e0 columnFamilyName f 2023-07-23 05:11:07,427 INFO [StoreOpener-013589b91f8cbbdbca6891e824725349-1] regionserver.HStore(310): Store=013589b91f8cbbdbca6891e824725349/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:07,427 INFO [StoreOpener-bb404ff7818a3f426e242c4b796057e0-1] regionserver.HStore(310): Store=bb404ff7818a3f426e242c4b796057e0/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:07,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,433 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=135, resume processing ppid=130 2023-07-23 05:11:07,433 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=130, state=SUCCESS; OpenRegionProcedure df50ed707009fa65a048d64b94129310, server=jenkins-hbase4.apache.org,45681,1690089042835 in 178 msec 2023-07-23 05:11:07,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=136, resume processing ppid=132 2023-07-23 05:11:07,434 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; OpenRegionProcedure 48aab242cc2ac5d1dbb2630462bf0304, server=jenkins-hbase4.apache.org,46173,1690089043304 in 177 msec 2023-07-23 05:11:07,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:07,435 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, ASSIGN in 348 msec 2023-07-23 05:11:07,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:07,435 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened bb404ff7818a3f426e242c4b796057e0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9942760640, jitterRate=-0.07400825619697571}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:07,435 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=132, ppid=129, state=SUCCESS; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, ASSIGN in 349 msec 2023-07-23 05:11:07,435 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for bb404ff7818a3f426e242c4b796057e0: 2023-07-23 05:11:07,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 013589b91f8cbbdbca6891e824725349; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11624929120, jitterRate=0.0826558917760849}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:07,436 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 013589b91f8cbbdbca6891e824725349: 2023-07-23 05:11:07,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0., pid=139, masterSystemTime=1690089067400 2023-07-23 05:11:07,436 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349., pid=137, masterSystemTime=1690089067400 2023-07-23 05:11:07,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,438 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=bb404ff7818a3f426e242c4b796057e0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:07,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,438 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067438"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089067438"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089067438"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089067438"}]},"ts":"1690089067438"} 2023-07-23 05:11:07,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,438 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 
2023-07-23 05:11:07,438 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=131 updating hbase:meta row=013589b91f8cbbdbca6891e824725349, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,438 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 086f86ff85ef955a0f08b856915ee8c4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-23 05:11:07,438 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067438"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089067438"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089067438"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089067438"}]},"ts":"1690089067438"} 2023-07-23 05:11:07,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:07,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,439 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,440 INFO [StoreOpener-086f86ff85ef955a0f08b856915ee8c4-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,441 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-23 05:11:07,441 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure bb404ff7818a3f426e242c4b796057e0, server=jenkins-hbase4.apache.org,46173,1690089043304 in 187 msec 2023-07-23 05:11:07,441 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=131 2023-07-23 05:11:07,441 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=131, state=SUCCESS; OpenRegionProcedure 013589b91f8cbbdbca6891e824725349, server=jenkins-hbase4.apache.org,45681,1690089042835 in 189 msec 2023-07-23 05:11:07,442 DEBUG [StoreOpener-086f86ff85ef955a0f08b856915ee8c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/f 2023-07-23 05:11:07,442 DEBUG [StoreOpener-086f86ff85ef955a0f08b856915ee8c4-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/f 2023-07-23 05:11:07,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, ASSIGN in 356 msec 2023-07-23 05:11:07,442 INFO [StoreOpener-086f86ff85ef955a0f08b856915ee8c4-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 086f86ff85ef955a0f08b856915ee8c4 columnFamilyName f 2023-07-23 05:11:07,443 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, ASSIGN in 356 msec 2023-07-23 05:11:07,443 INFO [StoreOpener-086f86ff85ef955a0f08b856915ee8c4-1] regionserver.HStore(310): Store=086f86ff85ef955a0f08b856915ee8c4/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:07,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,447 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,449 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:07,450 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 086f86ff85ef955a0f08b856915ee8c4; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10828539200, jitterRate=0.008486300706863403}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:07,450 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 086f86ff85ef955a0f08b856915ee8c4: 2023-07-23 05:11:07,451 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4., pid=138, masterSystemTime=1690089067400 2023-07-23 05:11:07,452 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,452 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,452 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=086f86ff85ef955a0f08b856915ee8c4, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,452 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067452"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089067452"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089067452"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089067452"}]},"ts":"1690089067452"} 2023-07-23 05:11:07,455 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=133 2023-07-23 05:11:07,455 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=133, state=SUCCESS; OpenRegionProcedure 086f86ff85ef955a0f08b856915ee8c4, server=jenkins-hbase4.apache.org,45681,1690089042835 in 201 msec 2023-07-23 05:11:07,456 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=133, resume processing ppid=129 2023-07-23 05:11:07,456 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=129, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, ASSIGN in 370 msec 2023-07-23 05:11:07,457 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:07,457 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089067457"}]},"ts":"1690089067457"} 2023-07-23 05:11:07,458 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-23 05:11:07,460 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=129, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:07,461 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 552 msec 2023-07-23 05:11:07,516 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=129 2023-07-23 05:11:07,516 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 129 completed 2023-07-23 05:11:07,516 
DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-23 05:11:07,516 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:07,521 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 2023-07-23 05:11:07,521 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:07,521 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-23 05:11:07,521 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:07,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-23 05:11:07,527 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:07,528 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-23 05:11:07,528 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-23 05:11:07,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=140, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-23 05:11:07,532 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089067532"}]},"ts":"1690089067532"} 2023-07-23 05:11:07,533 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-23 05:11:07,534 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-23 05:11:07,535 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, UNASSIGN}, {pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, UNASSIGN}, {pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, UNASSIGN}, {pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, UNASSIGN}, {pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, UNASSIGN}] 2023-07-23 05:11:07,536 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, UNASSIGN 2023-07-23 05:11:07,537 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, UNASSIGN 2023-07-23 05:11:07,537 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=143, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, UNASSIGN 2023-07-23 05:11:07,537 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=142, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, UNASSIGN 2023-07-23 05:11:07,537 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=141, ppid=140, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, UNASSIGN 2023-07-23 05:11:07,537 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=086f86ff85ef955a0f08b856915ee8c4, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,538 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067537"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067537"}]},"ts":"1690089067537"} 2023-07-23 05:11:07,538 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=bb404ff7818a3f426e242c4b796057e0, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:07,538 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=df50ed707009fa65a048d64b94129310, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,538 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=013589b91f8cbbdbca6891e824725349, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:07,538 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067538"}]},"ts":"1690089067538"} 2023-07-23 05:11:07,538 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067538"}]},"ts":"1690089067538"} 2023-07-23 05:11:07,538 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=48aab242cc2ac5d1dbb2630462bf0304, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:07,538 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067538"}]},"ts":"1690089067538"} 2023-07-23 05:11:07,538 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067538"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089067538"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089067538"}]},"ts":"1690089067538"} 2023-07-23 05:11:07,539 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=146, ppid=144, state=RUNNABLE; CloseRegionProcedure 086f86ff85ef955a0f08b856915ee8c4, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:07,540 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=147, ppid=141, state=RUNNABLE; CloseRegionProcedure df50ed707009fa65a048d64b94129310, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:07,540 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=148, ppid=142, state=RUNNABLE; CloseRegionProcedure 013589b91f8cbbdbca6891e824725349, server=jenkins-hbase4.apache.org,45681,1690089042835}] 2023-07-23 05:11:07,541 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=145, state=RUNNABLE; CloseRegionProcedure bb404ff7818a3f426e242c4b796057e0, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:07,541 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=143, state=RUNNABLE; CloseRegionProcedure 48aab242cc2ac5d1dbb2630462bf0304, server=jenkins-hbase4.apache.org,46173,1690089043304}] 2023-07-23 05:11:07,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-23 05:11:07,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 013589b91f8cbbdbca6891e824725349, disabling compactions & flushes 2023-07-23 05:11:07,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 
2023-07-23 05:11:07,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. after waiting 0 ms 2023-07-23 05:11:07,693 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,693 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing bb404ff7818a3f426e242c4b796057e0, disabling compactions & flushes 2023-07-23 05:11:07,695 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. after waiting 0 ms 2023-07-23 05:11:07,695 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 2023-07-23 05:11:07,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:07,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349. 2023-07-23 05:11:07,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 013589b91f8cbbdbca6891e824725349: 2023-07-23 05:11:07,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:07,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0. 
2023-07-23 05:11:07,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for bb404ff7818a3f426e242c4b796057e0: 2023-07-23 05:11:07,703 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=142 updating hbase:meta row=013589b91f8cbbdbca6891e824725349, regionState=CLOSED 2023-07-23 05:11:07,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,704 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067703"}]},"ts":"1690089067703"} 2023-07-23 05:11:07,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 48aab242cc2ac5d1dbb2630462bf0304, disabling compactions & flushes 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 086f86ff85ef955a0f08b856915ee8c4, disabling compactions & flushes 2023-07-23 05:11:07,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,706 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=bb404ff7818a3f426e242c4b796057e0, regionState=CLOSED 2023-07-23 05:11:07,706 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,706 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067706"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067706"}]},"ts":"1690089067706"} 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 
after waiting 0 ms 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. after waiting 0 ms 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 2023-07-23 05:11:07,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=148, resume processing ppid=142 2023-07-23 05:11:07,709 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=142, state=SUCCESS; CloseRegionProcedure 013589b91f8cbbdbca6891e824725349, server=jenkins-hbase4.apache.org,45681,1690089042835 in 167 msec 2023-07-23 05:11:07,710 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=145 2023-07-23 05:11:07,710 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=145, state=SUCCESS; CloseRegionProcedure bb404ff7818a3f426e242c4b796057e0, server=jenkins-hbase4.apache.org,46173,1690089043304 in 167 msec 2023-07-23 05:11:07,711 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=013589b91f8cbbdbca6891e824725349, UNASSIGN in 174 msec 2023-07-23 05:11:07,711 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:07,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=bb404ff7818a3f426e242c4b796057e0, UNASSIGN in 175 msec 2023-07-23 05:11:07,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304. 
2023-07-23 05:11:07,712 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 48aab242cc2ac5d1dbb2630462bf0304: 2023-07-23 05:11:07,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:07,714 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=143 updating hbase:meta row=48aab242cc2ac5d1dbb2630462bf0304, regionState=CLOSED 2023-07-23 05:11:07,714 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067714"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067714"}]},"ts":"1690089067714"} 2023-07-23 05:11:07,714 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4. 2023-07-23 05:11:07,714 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 086f86ff85ef955a0f08b856915ee8c4: 2023-07-23 05:11:07,716 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,717 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing df50ed707009fa65a048d64b94129310, disabling compactions & flushes 2023-07-23 05:11:07,718 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 2023-07-23 05:11:07,718 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=086f86ff85ef955a0f08b856915ee8c4, regionState=CLOSED 2023-07-23 05:11:07,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. after waiting 0 ms 2023-07-23 05:11:07,718 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 
2023-07-23 05:11:07,718 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690089067718"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067718"}]},"ts":"1690089067718"} 2023-07-23 05:11:07,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=143 2023-07-23 05:11:07,720 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=143, state=SUCCESS; CloseRegionProcedure 48aab242cc2ac5d1dbb2630462bf0304, server=jenkins-hbase4.apache.org,46173,1690089043304 in 174 msec 2023-07-23 05:11:07,723 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:07,724 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=143, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=48aab242cc2ac5d1dbb2630462bf0304, UNASSIGN in 185 msec 2023-07-23 05:11:07,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=146, resume processing ppid=144 2023-07-23 05:11:07,724 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=144, state=SUCCESS; CloseRegionProcedure 086f86ff85ef955a0f08b856915ee8c4, server=jenkins-hbase4.apache.org,45681,1690089042835 in 181 msec 2023-07-23 05:11:07,724 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310. 
2023-07-23 05:11:07,725 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for df50ed707009fa65a048d64b94129310: 2023-07-23 05:11:07,726 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=086f86ff85ef955a0f08b856915ee8c4, UNASSIGN in 189 msec 2023-07-23 05:11:07,726 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,726 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=141 updating hbase:meta row=df50ed707009fa65a048d64b94129310, regionState=CLOSED 2023-07-23 05:11:07,726 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690089067726"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089067726"}]},"ts":"1690089067726"} 2023-07-23 05:11:07,729 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=147, resume processing ppid=141 2023-07-23 05:11:07,729 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=141, state=SUCCESS; CloseRegionProcedure df50ed707009fa65a048d64b94129310, server=jenkins-hbase4.apache.org,45681,1690089042835 in 187 msec 2023-07-23 05:11:07,737 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=140 2023-07-23 05:11:07,737 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=140, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=df50ed707009fa65a048d64b94129310, UNASSIGN in 194 msec 2023-07-23 05:11:07,739 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089067739"}]},"ts":"1690089067739"} 2023-07-23 05:11:07,741 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-23 05:11:07,742 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-23 05:11:07,745 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=140, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 215 msec 2023-07-23 05:11:07,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=140 2023-07-23 05:11:07,834 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 140 completed 2023-07-23 05:11:07,834 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,836 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,838 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:07,839 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:07,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:07,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-23 05:11:07,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_123733219, current retry=0 2023-07-23 05:11:07,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_123733219. 2023-07-23 05:11:07,842 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:07,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:07,849 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:07,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-23 05:11:07,852 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:07,854 INFO [Listener at localhost/44477] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-23 05:11:07,855 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-23 05:11:07,855 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:07,856 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 87 connection: 172.31.14.131:39966 deadline: 1690089127855, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-23 05:11:07,856 DEBUG [Listener at localhost/44477] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-23 05:11:07,857 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-23 05:11:07,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] procedure2.ProcedureExecutor(1029): Stored pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,859 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=152, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,859 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_123733219' 2023-07-23 05:11:07,860 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=152, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,861 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:07,862 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:07,863 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:07,869 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,869 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,869 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,869 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 
2023-07-23 05:11:07,869 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-23 05:11:07,872 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/recovered.edits] 2023-07-23 05:11:07,872 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/recovered.edits] 2023-07-23 05:11:07,872 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/recovered.edits] 2023-07-23 05:11:07,872 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/recovered.edits] 2023-07-23 05:11:07,872 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/f, FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/recovered.edits] 2023-07-23 05:11:07,887 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310/recovered.edits/4.seqid 2023-07-23 05:11:07,887 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/recovered.edits/4.seqid to 
hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304/recovered.edits/4.seqid 2023-07-23 05:11:07,888 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0/recovered.edits/4.seqid 2023-07-23 05:11:07,888 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4/recovered.edits/4.seqid 2023-07-23 05:11:07,888 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/df50ed707009fa65a048d64b94129310 2023-07-23 05:11:07,889 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/48aab242cc2ac5d1dbb2630462bf0304 2023-07-23 05:11:07,889 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/bb404ff7818a3f426e242c4b796057e0 2023-07-23 05:11:07,889 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/086f86ff85ef955a0f08b856915ee8c4 2023-07-23 05:11:07,889 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/recovered.edits/4.seqid to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/archive/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349/recovered.edits/4.seqid 2023-07-23 05:11:07,890 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/.tmp/data/default/Group_testDisabledTableMove/013589b91f8cbbdbca6891e824725349 2023-07-23 05:11:07,890 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-23 05:11:07,892 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=152, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,895 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-23 05:11:07,900 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-23 05:11:07,901 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=152, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,901 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-23 05:11:07,901 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089067901"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:07,901 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089067901"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:07,901 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089067901"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:07,901 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089067901"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:07,902 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089067901"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:07,903 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-23 05:11:07,903 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => df50ed707009fa65a048d64b94129310, NAME => 'Group_testDisabledTableMove,,1690089066907.df50ed707009fa65a048d64b94129310.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 013589b91f8cbbdbca6891e824725349, NAME => 'Group_testDisabledTableMove,aaaaa,1690089066907.013589b91f8cbbdbca6891e824725349.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 48aab242cc2ac5d1dbb2630462bf0304, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690089066907.48aab242cc2ac5d1dbb2630462bf0304.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 086f86ff85ef955a0f08b856915ee8c4, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690089066907.086f86ff85ef955a0f08b856915ee8c4.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => bb404ff7818a3f426e242c4b796057e0, NAME => 'Group_testDisabledTableMove,zzzzz,1690089066907.bb404ff7818a3f426e242c4b796057e0.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-23 05:11:07,903 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-23 05:11:07,904 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089067904"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:07,905 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-23 05:11:07,908 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=152, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-23 05:11:07,909 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 51 msec 2023-07-23 05:11:07,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(1230): Checking to see if procedure is done pid=152 2023-07-23 05:11:07,972 INFO [Listener at localhost/44477] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 152 completed 2023-07-23 05:11:07,976 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:07,976 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:07,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:07,977 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:07,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:07,977 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981] to rsgroup default 2023-07-23 05:11:07,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:07,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:07,981 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:07,982 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_123733219, current retry=0 2023-07-23 05:11:07,982 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,37441,1690089043078, jenkins-hbase4.apache.org,41981,1690089047062] are moved back to Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,983 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_123733219 => default 2023-07-23 05:11:07,983 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:07,984 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_123733219 2023-07-23 05:11:07,987 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:07,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:07,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:11:07,990 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:07,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:07,991 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:07,991 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:07,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:07,992 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:07,993 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:07,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:07,997 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:07,999 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:08,002 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:08,002 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:08,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:08,005 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:08,007 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:08,008 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:08,011 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:08,011 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:08,013 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:08,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:08,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090268013, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:08,014 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:08,016 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:08,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:08,016 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:08,017 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:08,017 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:08,017 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:08,042 INFO [Listener at localhost/44477] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=513 (was 510) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_493393426_17 at /127.0.0.1:43572 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-596213111_17 at /127.0.0.1:43528 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a5e2fc3-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2db71259-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=808 (was 775) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=515 (was 508) - SystemLoadAverage LEAK? 
-, ProcessCount=179 (was 179), AvailableMemoryMB=6338 (was 6357) 2023-07-23 05:11:08,042 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-23 05:11:08,077 INFO [Listener at localhost/44477] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=513, OpenFileDescriptor=808, MaxFileDescriptor=60000, SystemLoadAverage=515, ProcessCount=179, AvailableMemoryMB=6337 2023-07-23 05:11:08,077 WARN [Listener at localhost/44477] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-23 05:11:08,077 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-23 05:11:08,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:08,081 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:08,082 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:08,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 05:11:08,082 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:08,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:08,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:08,084 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:08,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:08,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:08,089 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:08,092 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:08,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:08,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:08,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:08,098 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:08,102 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:08,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:08,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:08,107 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:37433] to rsgroup master 2023-07-23 05:11:08,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:08,107 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:39966 deadline: 1690090268107, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 2023-07-23 05:11:08,108 WARN [Listener at localhost/44477] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:37433 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:08,109 INFO [Listener at localhost/44477] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:08,110 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:08,110 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:08,111 INFO [Listener at localhost/44477] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:37441, jenkins-hbase4.apache.org:41981, jenkins-hbase4.apache.org:45681, jenkins-hbase4.apache.org:46173], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:08,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:08,111 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37433] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:08,112 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 05:11:08,112 INFO [Listener at localhost/44477] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 05:11:08,112 DEBUG [Listener at localhost/44477] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x518a774a to 127.0.0.1:63392 2023-07-23 05:11:08,112 DEBUG [Listener at localhost/44477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,116 DEBUG [Listener at localhost/44477] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 05:11:08,116 DEBUG [Listener at localhost/44477] util.JVMClusterUtil(257): Found active master hash=217987863, stopped=false 2023-07-23 05:11:08,117 DEBUG [Listener at localhost/44477] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 05:11:08,117 DEBUG [Listener at localhost/44477] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 05:11:08,117 INFO [Listener at localhost/44477] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:11:08,119 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:08,119 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:08,119 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:08,119 
DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:08,119 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:08,119 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:08,119 INFO [Listener at localhost/44477] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 05:11:08,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:08,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:08,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:08,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:08,120 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:08,121 DEBUG [Listener at localhost/44477] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4be9f084 to 127.0.0.1:63392 2023-07-23 05:11:08,121 DEBUG [Listener at localhost/44477] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,121 INFO [Listener at localhost/44477] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45681,1690089042835' ***** 2023-07-23 05:11:08,122 INFO [Listener at localhost/44477] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:08,122 INFO [Listener at localhost/44477] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37441,1690089043078' ***** 2023-07-23 05:11:08,122 INFO [Listener at localhost/44477] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:08,122 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:08,131 INFO [Listener at localhost/44477] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46173,1690089043304' ***** 2023-07-23 05:11:08,131 INFO [Listener at localhost/44477] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:08,132 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:08,132 INFO [Listener at localhost/44477] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41981,1690089047062' ***** 2023-07-23 05:11:08,132 INFO 
[RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:08,132 INFO [Listener at localhost/44477] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:08,133 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:08,148 INFO [RS:3;jenkins-hbase4:41981] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5bcd3e79{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:08,149 INFO [RS:2;jenkins-hbase4:46173] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2f709731{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:08,149 INFO [RS:1;jenkins-hbase4:37441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@46ffcd75{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:08,148 INFO [RS:0;jenkins-hbase4:45681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6770849c{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:08,149 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:08,149 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:08,149 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,149 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,152 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,152 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:08,152 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:08,152 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,153 INFO [RS:2;jenkins-hbase4:46173] server.AbstractConnector(383): Stopped ServerConnector@206a46fd{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:08,154 INFO [RS:0;jenkins-hbase4:45681] server.AbstractConnector(383): Stopped ServerConnector@24b5075d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:08,153 INFO [RS:1;jenkins-hbase4:37441] server.AbstractConnector(383): Stopped ServerConnector@7f6e0343{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:08,153 INFO [RS:3;jenkins-hbase4:41981] server.AbstractConnector(383): Stopped ServerConnector@5a5d92bc{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:08,154 INFO [RS:1;jenkins-hbase4:37441] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:08,154 INFO [RS:0;jenkins-hbase4:45681] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:08,154 INFO [RS:2;jenkins-hbase4:46173] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:08,155 INFO 
[RS:0;jenkins-hbase4:45681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@282c1c14{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:08,154 INFO [RS:3;jenkins-hbase4:41981] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:08,155 INFO [RS:1;jenkins-hbase4:37441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c8a8cb2{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:08,159 INFO [RS:2;jenkins-hbase4:46173] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5109bb49{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:08,159 INFO [RS:1;jenkins-hbase4:37441] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@539fa719{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:08,160 INFO [RS:2;jenkins-hbase4:46173] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67c48ba9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:08,159 INFO [RS:0;jenkins-hbase4:45681] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10630bfe{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:08,159 INFO [RS:3;jenkins-hbase4:41981] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1516c024{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:08,161 INFO [RS:3;jenkins-hbase4:41981] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1439103a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:08,164 INFO [RS:0;jenkins-hbase4:45681] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:08,165 INFO [RS:0;jenkins-hbase4:45681] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:08,165 INFO [RS:0;jenkins-hbase4:45681] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:08,165 INFO [RS:2;jenkins-hbase4:46173] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:08,165 INFO [RS:2;jenkins-hbase4:46173] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:08,165 INFO [RS:2;jenkins-hbase4:46173] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 05:11:08,165 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(3305): Received CLOSE for b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:08,166 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(3305): Received CLOSE for a6558fab23b07212eec6b6a195311310 2023-07-23 05:11:08,166 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(3305): Received CLOSE for a44c79c49f6bdbba941d693414528c24 2023-07-23 05:11:08,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b29871ac63cf64acce94f886ab9279a6, disabling compactions & flushes 2023-07-23 05:11:08,167 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:08,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:08,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. after waiting 0 ms 2023-07-23 05:11:08,167 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:08,167 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:08,167 DEBUG [RS:2;jenkins-hbase4:46173] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a76d190 to 127.0.0.1:63392 2023-07-23 05:11:08,167 DEBUG [RS:2;jenkins-hbase4:46173] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,168 INFO [RS:2;jenkins-hbase4:46173] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:08,168 INFO [RS:2;jenkins-hbase4:46173] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:08,168 INFO [RS:2;jenkins-hbase4:46173] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:08,168 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 05:11:08,168 INFO [RS:3;jenkins-hbase4:41981] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:08,167 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(3305): Received CLOSE for be20752d9b84181a034cc1472048379b 2023-07-23 05:11:08,168 INFO [RS:3;jenkins-hbase4:41981] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:08,168 INFO [RS:1;jenkins-hbase4:37441] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:08,168 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:08,168 INFO [RS:1;jenkins-hbase4:37441] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:08,168 INFO [RS:1;jenkins-hbase4:37441] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-23 05:11:08,168 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:08,168 DEBUG [RS:1;jenkins-hbase4:37441] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x600bfc99 to 127.0.0.1:63392 2023-07-23 05:11:08,168 DEBUG [RS:1;jenkins-hbase4:37441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,169 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37441,1690089043078; all regions closed. 2023-07-23 05:11:08,168 INFO [RS:3;jenkins-hbase4:41981] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:08,169 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:08,169 DEBUG [RS:3;jenkins-hbase4:41981] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5a8d6a49 to 127.0.0.1:63392 2023-07-23 05:11:08,169 DEBUG [RS:3;jenkins-hbase4:41981] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,169 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41981,1690089047062; all regions closed. 2023-07-23 05:11:08,168 DEBUG [RS:0;jenkins-hbase4:45681] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7aa6fe8f to 127.0.0.1:63392 2023-07-23 05:11:08,170 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,170 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 05:11:08,170 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1478): Online Regions={be20752d9b84181a034cc1472048379b=testRename,,1690089061244.be20752d9b84181a034cc1472048379b.} 2023-07-23 05:11:08,170 DEBUG [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1504): Waiting on be20752d9b84181a034cc1472048379b 2023-07-23 05:11:08,177 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-23 05:11:08,177 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1478): Online Regions={b29871ac63cf64acce94f886ab9279a6=unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6., 1588230740=hbase:meta,,1.1588230740, a6558fab23b07212eec6b6a195311310=hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310., a44c79c49f6bdbba941d693414528c24=hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24.} 2023-07-23 05:11:08,178 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1504): Waiting on 1588230740, a44c79c49f6bdbba941d693414528c24, a6558fab23b07212eec6b6a195311310, b29871ac63cf64acce94f886ab9279a6 2023-07-23 05:11:08,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:11:08,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:11:08,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:11:08,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:11:08,178 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:11:08,178 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=75.41 KB heapSize=118.59 KB 2023-07-23 05:11:08,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing be20752d9b84181a034cc1472048379b, disabling compactions & flushes 2023-07-23 05:11:08,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:08,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:08,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690089061244.be20752d9b84181a034cc1472048379b. after waiting 0 ms 2023-07-23 05:11:08,181 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:08,225 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/unmovedTable/b29871ac63cf64acce94f886ab9279a6/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 05:11:08,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:08,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b29871ac63cf64acce94f886ab9279a6: 2023-07-23 05:11:08,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690089062917.b29871ac63cf64acce94f886ab9279a6. 2023-07-23 05:11:08,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a6558fab23b07212eec6b6a195311310, disabling compactions & flushes 2023-07-23 05:11:08,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:11:08,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:11:08,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. after waiting 0 ms 2023-07-23 05:11:08,228 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 
2023-07-23 05:11:08,228 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a6558fab23b07212eec6b6a195311310 1/1 column families, dataSize=22.07 KB heapSize=36.54 KB 2023-07-23 05:11:08,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/default/testRename/be20752d9b84181a034cc1472048379b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-23 05:11:08,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:08,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for be20752d9b84181a034cc1472048379b: 2023-07-23 05:11:08,250 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690089061244.be20752d9b84181a034cc1472048379b. 2023-07-23 05:11:08,253 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/WALs/jenkins-hbase4.apache.org,41981,1690089047062/jenkins-hbase4.apache.org%2C41981%2C1690089047062.1690089047500 not finished, retry = 0 2023-07-23 05:11:08,297 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=69.60 KB at sequenceid=206 (bloomFilter=false), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/info/c17a3d31e14b4677a293a2b00b8be38f 2023-07-23 05:11:08,303 DEBUG [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs 2023-07-23 05:11:08,303 INFO [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37441%2C1690089043078.meta:.meta(num 1690089045662) 2023-07-23 05:11:08,310 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=22.07 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/.tmp/m/ce2738ed547f4e228b77000383d5dcc9 2023-07-23 05:11:08,311 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c17a3d31e14b4677a293a2b00b8be38f 2023-07-23 05:11:08,330 DEBUG [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs 2023-07-23 05:11:08,330 INFO [RS:1;jenkins-hbase4:37441] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37441%2C1690089043078:(num 1690089045538) 2023-07-23 05:11:08,330 DEBUG [RS:1;jenkins-hbase4:37441] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,331 INFO [RS:1;jenkins-hbase4:37441] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,331 INFO [RS:1;jenkins-hbase4:37441] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:08,331 INFO [RS:1;jenkins-hbase4:37441] 
regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:08,331 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:08,332 INFO [RS:1;jenkins-hbase4:37441] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:08,332 INFO [RS:1;jenkins-hbase4:37441] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:08,333 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce2738ed547f4e228b77000383d5dcc9 2023-07-23 05:11:08,334 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/.tmp/m/ce2738ed547f4e228b77000383d5dcc9 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m/ce2738ed547f4e228b77000383d5dcc9 2023-07-23 05:11:08,339 INFO [RS:1;jenkins-hbase4:37441] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37441 2023-07-23 05:11:08,361 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37441,1690089043078 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 
05:11:08,362 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,363 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37441,1690089043078] 2023-07-23 05:11:08,363 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37441,1690089043078; numProcessing=1 2023-07-23 05:11:08,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce2738ed547f4e228b77000383d5dcc9 2023-07-23 05:11:08,366 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/m/ce2738ed547f4e228b77000383d5dcc9, entries=22, sequenceid=101, filesize=5.9 K 2023-07-23 05:11:08,366 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 05:11:08,366 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 05:11:08,367 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37441,1690089043078 already deleted, retry=false 2023-07-23 05:11:08,367 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37441,1690089043078 expired; onlineServers=3 2023-07-23 05:11:08,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~22.07 KB/22601, heapSize ~36.52 KB/37400, currentSize=0 B/0 for a6558fab23b07212eec6b6a195311310 in 139ms, sequenceid=101, compaction requested=false 2023-07-23 05:11:08,369 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 05:11:08,369 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 05:11:08,376 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45681,1690089042835; all regions closed. 
2023-07-23 05:11:08,378 DEBUG [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1504): Waiting on 1588230740, a44c79c49f6bdbba941d693414528c24, a6558fab23b07212eec6b6a195311310 2023-07-23 05:11:08,382 DEBUG [RS:3;jenkins-hbase4:41981] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs 2023-07-23 05:11:08,383 INFO [RS:3;jenkins-hbase4:41981] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41981%2C1690089047062:(num 1690089047500) 2023-07-23 05:11:08,383 DEBUG [RS:3;jenkins-hbase4:41981] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,383 INFO [RS:3;jenkins-hbase4:41981] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,386 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 05:11:08,386 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 05:11:08,391 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=206 (bloomFilter=false), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/rep_barrier/968f9cb6d4f44166808c3df6ef791e44 2023-07-23 05:11:08,393 INFO [RS:3;jenkins-hbase4:41981] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:08,405 INFO [RS:3;jenkins-hbase4:41981] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:08,405 INFO [RS:3;jenkins-hbase4:41981] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:08,405 INFO [RS:3;jenkins-hbase4:41981] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:08,405 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 05:11:08,408 INFO [RS:3;jenkins-hbase4:41981] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41981 2023-07-23 05:11:08,409 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 968f9cb6d4f44166808c3df6ef791e44 2023-07-23 05:11:08,410 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,410 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:08,410 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:08,410 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41981,1690089047062 2023-07-23 05:11:08,413 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41981,1690089047062] 2023-07-23 05:11:08,413 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41981,1690089047062; numProcessing=2 2023-07-23 05:11:08,416 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41981,1690089047062 already deleted, retry=false 2023-07-23 05:11:08,416 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41981,1690089047062 expired; onlineServers=2 2023-07-23 05:11:08,433 DEBUG [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs 2023-07-23 05:11:08,433 INFO [RS:0;jenkins-hbase4:45681] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45681%2C1690089042835:(num 1690089045538) 2023-07-23 05:11:08,433 DEBUG [RS:0;jenkins-hbase4:45681] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,433 INFO [RS:0;jenkins-hbase4:45681] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,433 INFO [RS:0;jenkins-hbase4:45681] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:08,433 INFO [RS:0;jenkins-hbase4:45681] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:08,433 INFO [RS:0;jenkins-hbase4:45681] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:08,433 INFO [RS:0;jenkins-hbase4:45681] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 05:11:08,434 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:08,435 INFO [RS:0;jenkins-hbase4:45681] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45681 2023-07-23 05:11:08,437 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:08,437 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45681,1690089042835 2023-07-23 05:11:08,437 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,439 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45681,1690089042835] 2023-07-23 05:11:08,439 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45681,1690089042835; numProcessing=3 2023-07-23 05:11:08,439 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/rsgroup/a6558fab23b07212eec6b6a195311310/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=29 2023-07-23 05:11:08,440 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45681,1690089042835 already deleted, retry=false 2023-07-23 05:11:08,440 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45681,1690089042835 expired; onlineServers=1 2023-07-23 05:11:08,440 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:08,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:11:08,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a6558fab23b07212eec6b6a195311310: 2023-07-23 05:11:08,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690089046172.a6558fab23b07212eec6b6a195311310. 2023-07-23 05:11:08,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a44c79c49f6bdbba941d693414528c24, disabling compactions & flushes 2023-07-23 05:11:08,441 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:11:08,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 
2023-07-23 05:11:08,441 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. after waiting 0 ms 2023-07-23 05:11:08,442 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:11:08,442 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a44c79c49f6bdbba941d693414528c24 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-23 05:11:08,455 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=206 (bloomFilter=false), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/table/dd27494d1fbc40a8baf3be540f43b156 2023-07-23 05:11:08,470 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd27494d1fbc40a8baf3be540f43b156 2023-07-23 05:11:08,471 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/info/c17a3d31e14b4677a293a2b00b8be38f as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info/c17a3d31e14b4677a293a2b00b8be38f 2023-07-23 05:11:08,478 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for c17a3d31e14b4677a293a2b00b8be38f 2023-07-23 05:11:08,478 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/info/c17a3d31e14b4677a293a2b00b8be38f, entries=83, sequenceid=206, filesize=14.3 K 2023-07-23 05:11:08,482 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/rep_barrier/968f9cb6d4f44166808c3df6ef791e44 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier/968f9cb6d4f44166808c3df6ef791e44 2023-07-23 05:11:08,482 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/.tmp/info/c0fe1aa640264993a2e690f20b9381d3 2023-07-23 05:11:08,489 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/.tmp/info/c0fe1aa640264993a2e690f20b9381d3 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/info/c0fe1aa640264993a2e690f20b9381d3 2023-07-23 05:11:08,489 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for 968f9cb6d4f44166808c3df6ef791e44 2023-07-23 05:11:08,489 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/rep_barrier/968f9cb6d4f44166808c3df6ef791e44, entries=18, sequenceid=206, filesize=6.9 K 2023-07-23 05:11:08,493 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/.tmp/table/dd27494d1fbc40a8baf3be540f43b156 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table/dd27494d1fbc40a8baf3be540f43b156 2023-07-23 05:11:08,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/info/c0fe1aa640264993a2e690f20b9381d3, entries=2, sequenceid=6, filesize=4.8 K 2023-07-23 05:11:08,498 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for a44c79c49f6bdbba941d693414528c24 in 56ms, sequenceid=6, compaction requested=false 2023-07-23 05:11:08,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for dd27494d1fbc40a8baf3be540f43b156 2023-07-23 05:11:08,501 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/table/dd27494d1fbc40a8baf3be540f43b156, entries=27, sequenceid=206, filesize=7.2 K 2023-07-23 05:11:08,502 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~75.41 KB/77223, heapSize ~118.54 KB/121384, currentSize=0 B/0 for 1588230740 in 324ms, sequenceid=206, compaction requested=false 2023-07-23 05:11:08,509 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/namespace/a44c79c49f6bdbba941d693414528c24/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-23 05:11:08,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 2023-07-23 05:11:08,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a44c79c49f6bdbba941d693414528c24: 2023-07-23 05:11:08,510 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690089046016.a44c79c49f6bdbba941d693414528c24. 
2023-07-23 05:11:08,512 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/data/hbase/meta/1588230740/recovered.edits/209.seqid, newMaxSeqId=209, maxSeqId=17 2023-07-23 05:11:08,512 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:08,513 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:08,513 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:11:08,513 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:08,578 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46173,1690089043304; all regions closed. 2023-07-23 05:11:08,588 DEBUG [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs 2023-07-23 05:11:08,589 INFO [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46173%2C1690089043304.meta:.meta(num 1690089048295) 2023-07-23 05:11:08,603 DEBUG [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/oldWALs 2023-07-23 05:11:08,603 INFO [RS:2;jenkins-hbase4:46173] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46173%2C1690089043304:(num 1690089045537) 2023-07-23 05:11:08,603 DEBUG [RS:2;jenkins-hbase4:46173] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,603 INFO [RS:2;jenkins-hbase4:46173] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:08,604 INFO [RS:2;jenkins-hbase4:46173] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:08,604 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 05:11:08,605 INFO [RS:2;jenkins-hbase4:46173] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46173 2023-07-23 05:11:08,607 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:08,607 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46173,1690089043304 2023-07-23 05:11:08,607 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46173,1690089043304] 2023-07-23 05:11:08,607 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46173,1690089043304; numProcessing=4 2023-07-23 05:11:08,609 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46173,1690089043304 already deleted, retry=false 2023-07-23 05:11:08,609 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46173,1690089043304 expired; onlineServers=0 2023-07-23 05:11:08,609 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37433,1690089040778' ***** 2023-07-23 05:11:08,609 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 05:11:08,610 DEBUG [M:0;jenkins-hbase4:37433] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4dc3312c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:08,610 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:08,613 INFO [M:0;jenkins-hbase4:37433] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2b2c25c2{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 05:11:08,614 INFO [M:0;jenkins-hbase4:37433] server.AbstractConnector(383): Stopped ServerConnector@35ac267a{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:08,614 INFO [M:0;jenkins-hbase4:37433] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:08,614 INFO [M:0;jenkins-hbase4:37433] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c64b740{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:08,615 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:08,615 INFO [M:0;jenkins-hbase4:37433] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@4ad2bb29{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:08,615 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:08,615 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37433,1690089040778 2023-07-23 05:11:08,615 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:08,616 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37433,1690089040778; all regions closed. 2023-07-23 05:11:08,616 DEBUG [M:0;jenkins-hbase4:37433] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:08,616 INFO [M:0;jenkins-hbase4:37433] master.HMaster(1491): Stopping master jetty server 2023-07-23 05:11:08,616 INFO [M:0;jenkins-hbase4:37433] server.AbstractConnector(383): Stopped ServerConnector@293472ac{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:08,617 DEBUG [M:0;jenkins-hbase4:37433] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 05:11:08,617 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 05:11:08,617 DEBUG [M:0;jenkins-hbase4:37433] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 05:11:08,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089045069] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089045069,5,FailOnTimeoutGroup] 2023-07-23 05:11:08,617 INFO [M:0;jenkins-hbase4:37433] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 05:11:08,617 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089045069] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089045069,5,FailOnTimeoutGroup] 2023-07-23 05:11:08,617 INFO [M:0;jenkins-hbase4:37433] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-23 05:11:08,618 INFO [M:0;jenkins-hbase4:37433] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 05:11:08,618 DEBUG [M:0;jenkins-hbase4:37433] master.HMaster(1512): Stopping service threads 2023-07-23 05:11:08,618 INFO [M:0;jenkins-hbase4:37433] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 05:11:08,618 ERROR [M:0;jenkins-hbase4:37433] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-23 05:11:08,619 INFO [M:0;jenkins-hbase4:37433] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 05:11:08,619 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-23 05:11:08,619 DEBUG [M:0;jenkins-hbase4:37433] zookeeper.ZKUtil(398): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 05:11:08,620 WARN [M:0;jenkins-hbase4:37433] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 05:11:08,620 INFO [M:0;jenkins-hbase4:37433] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 05:11:08,620 INFO [M:0;jenkins-hbase4:37433] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 05:11:08,620 DEBUG [M:0;jenkins-hbase4:37433] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 05:11:08,620 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:08,620 DEBUG [M:0;jenkins-hbase4:37433] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:08,620 DEBUG [M:0;jenkins-hbase4:37433] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 05:11:08,620 DEBUG [M:0;jenkins-hbase4:37433] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 05:11:08,620 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=509.35 KB heapSize=609.36 KB 2023-07-23 05:11:08,639 INFO [M:0;jenkins-hbase4:37433] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=509.35 KB at sequenceid=1128 (bloomFilter=true), to=hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0a3900ff4fb840c98ae3d7528530e3f9 2023-07-23 05:11:08,645 DEBUG [M:0;jenkins-hbase4:37433] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0a3900ff4fb840c98ae3d7528530e3f9 as hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0a3900ff4fb840c98ae3d7528530e3f9 2023-07-23 05:11:08,651 INFO [M:0;jenkins-hbase4:37433] regionserver.HStore(1080): Added hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0a3900ff4fb840c98ae3d7528530e3f9, entries=151, sequenceid=1128, filesize=26.6 K 2023-07-23 05:11:08,652 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegion(2948): Finished flush of dataSize ~509.35 KB/521577, heapSize ~609.34 KB/623968, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 32ms, sequenceid=1128, compaction requested=false 2023-07-23 05:11:08,656 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:08,656 DEBUG [M:0;jenkins-hbase4:37433] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:08,661 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:08,662 INFO [M:0;jenkins-hbase4:37433] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 05:11:08,662 INFO [M:0;jenkins-hbase4:37433] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37433 2023-07-23 05:11:08,664 DEBUG [M:0;jenkins-hbase4:37433] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,37433,1690089040778 already deleted, retry=false 2023-07-23 05:11:08,765 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:08,765 INFO [RS:2;jenkins-hbase4:46173] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46173,1690089043304; zookeeper connection closed. 
2023-07-23 05:11:08,765 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:46173-0x1019096aaec0003, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:08,765 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5584bd81] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5584bd81 2023-07-23 05:11:08,865 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:08,865 INFO [RS:0;jenkins-hbase4:45681] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45681,1690089042835; zookeeper connection closed. 2023-07-23 05:11:08,865 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:45681-0x1019096aaec0001, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:08,865 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2eea2431] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2eea2431 2023-07-23 05:11:08,965 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:08,965 INFO [RS:3;jenkins-hbase4:41981] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41981,1690089047062; zookeeper connection closed. 2023-07-23 05:11:08,965 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:41981-0x1019096aaec000b, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:08,965 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1d95e360] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1d95e360 2023-07-23 05:11:09,065 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:09,065 INFO [RS:1;jenkins-hbase4:37441] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37441,1690089043078; zookeeper connection closed. 
2023-07-23 05:11:09,065 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): regionserver:37441-0x1019096aaec0002, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:09,066 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@635a53a0] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@635a53a0 2023-07-23 05:11:09,066 INFO [Listener at localhost/44477] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-23 05:11:09,166 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:09,166 INFO [M:0;jenkins-hbase4:37433] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37433,1690089040778; zookeeper connection closed. 2023-07-23 05:11:09,166 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): master:37433-0x1019096aaec0000, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:09,168 WARN [Listener at localhost/44477] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:09,171 INFO [Listener at localhost/44477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:09,275 WARN [BP-1430322893-172.31.14.131-1690089037211 heartbeating to localhost/127.0.0.1:36893] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:09,275 WARN [BP-1430322893-172.31.14.131-1690089037211 heartbeating to localhost/127.0.0.1:36893] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1430322893-172.31.14.131-1690089037211 (Datanode Uuid 6caa7b21-cb48-49f8-bb38-712cb611ee48) service to localhost/127.0.0.1:36893 2023-07-23 05:11:09,277 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data5/current/BP-1430322893-172.31.14.131-1690089037211] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:09,277 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data6/current/BP-1430322893-172.31.14.131-1690089037211] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:09,279 WARN [Listener at localhost/44477] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:09,282 INFO [Listener at localhost/44477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:09,286 WARN [BP-1430322893-172.31.14.131-1690089037211 heartbeating to localhost/127.0.0.1:36893] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:09,286 WARN [BP-1430322893-172.31.14.131-1690089037211 heartbeating to localhost/127.0.0.1:36893] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-1430322893-172.31.14.131-1690089037211 (Datanode Uuid 1df2bbdc-3ed1-47fb-8373-7429a6af5df3) service to localhost/127.0.0.1:36893 2023-07-23 05:11:09,286 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data3/current/BP-1430322893-172.31.14.131-1690089037211] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:09,287 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data4/current/BP-1430322893-172.31.14.131-1690089037211] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:09,288 WARN [Listener at localhost/44477] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:09,292 INFO [Listener at localhost/44477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:09,395 WARN [BP-1430322893-172.31.14.131-1690089037211 heartbeating to localhost/127.0.0.1:36893] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:09,395 WARN [BP-1430322893-172.31.14.131-1690089037211 heartbeating to localhost/127.0.0.1:36893] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1430322893-172.31.14.131-1690089037211 (Datanode Uuid 86f67442-7973-4a1c-bf80-2134abfef945) service to localhost/127.0.0.1:36893 2023-07-23 05:11:09,396 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data1/current/BP-1430322893-172.31.14.131-1690089037211] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:09,396 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/cluster_fda3020b-7147-ecd1-5879-e2eb4316fd79/dfs/data/data2/current/BP-1430322893-172.31.14.131-1690089037211] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:09,426 INFO [Listener at localhost/44477] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:09,546 INFO [Listener at localhost/44477] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 05:11:09,595 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.log.dir so I do NOT create it in target/test-data/e5462a79-6102-6d72-1294-bfd010974c13 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/24d26811-9e7e-75c5-0ea3-8aa80a9c60e3/hadoop.tmp.dir so I do NOT create it in target/test-data/e5462a79-6102-6d72-1294-bfd010974c13 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013, deleteOnExit=true 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/test.cache.data in system properties and HBase conf 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.tmp.dir in system properties and HBase conf 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir in system properties and HBase conf 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-23 05:11:09,596 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-23 05:11:09,597 DEBUG [Listener at localhost/44477] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-23 05:11:09,597 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/nfs.dump.dir in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-23 05:11:09,598 INFO [Listener at localhost/44477] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-23 05:11:09,603 WARN [Listener at localhost/44477] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 05:11:09,603 WARN [Listener at localhost/44477] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 05:11:09,643 DEBUG [Listener at localhost/44477-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1019096aaec000a, quorum=127.0.0.1:63392, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-23 05:11:09,643 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1019096aaec000a, quorum=127.0.0.1:63392, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-23 05:11:09,654 WARN [Listener at localhost/44477] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:09,656 INFO [Listener at localhost/44477] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:09,661 INFO [Listener at localhost/44477] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/Jetty_localhost_44459_hdfs____43htz6/webapp 2023-07-23 05:11:09,756 INFO [Listener at localhost/44477] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44459 2023-07-23 05:11:09,762 WARN [Listener at localhost/44477] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 05:11:09,762 WARN [Listener at localhost/44477] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 05:11:09,806 WARN [Listener at localhost/44369] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:09,820 WARN [Listener at localhost/44369] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 05:11:09,822 WARN [Listener 
at localhost/44369] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:09,823 INFO [Listener at localhost/44369] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:09,828 INFO [Listener at localhost/44369] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/Jetty_localhost_36263_datanode____.3q9a7h/webapp 2023-07-23 05:11:09,922 INFO [Listener at localhost/44369] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36263 2023-07-23 05:11:09,928 WARN [Listener at localhost/36093] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:09,943 WARN [Listener at localhost/36093] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 05:11:09,945 WARN [Listener at localhost/36093] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:09,946 INFO [Listener at localhost/36093] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:09,949 INFO [Listener at localhost/36093] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/Jetty_localhost_42185_datanode____12dt3r/webapp 2023-07-23 05:11:10,038 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x84a9de3efa791382: Processing first storage report for DS-5cb00f08-f655-4094-b517-e0140a3e44b9 from datanode c5e774cd-89a3-4f5e-9980-3cc55251aea2 2023-07-23 05:11:10,039 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x84a9de3efa791382: from storage DS-5cb00f08-f655-4094-b517-e0140a3e44b9 node DatanodeRegistration(127.0.0.1:32981, datanodeUuid=c5e774cd-89a3-4f5e-9980-3cc55251aea2, infoPort=36885, infoSecurePort=0, ipcPort=36093, storageInfo=lv=-57;cid=testClusterID;nsid=198894400;c=1690089069606), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:10,039 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x84a9de3efa791382: Processing first storage report for DS-72bab835-2a99-47d4-924b-0351ac4876db from datanode c5e774cd-89a3-4f5e-9980-3cc55251aea2 2023-07-23 05:11:10,039 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x84a9de3efa791382: from storage DS-72bab835-2a99-47d4-924b-0351ac4876db node DatanodeRegistration(127.0.0.1:32981, datanodeUuid=c5e774cd-89a3-4f5e-9980-3cc55251aea2, infoPort=36885, infoSecurePort=0, ipcPort=36093, storageInfo=lv=-57;cid=testClusterID;nsid=198894400;c=1690089069606), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:10,061 INFO [Listener at localhost/36093] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42185 2023-07-23 05:11:10,069 WARN [Listener at localhost/36877] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-23 05:11:10,096 WARN [Listener at localhost/36877] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 05:11:10,100 WARN [Listener at localhost/36877] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:10,101 INFO [Listener at localhost/36877] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:10,107 INFO [Listener at localhost/36877] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/Jetty_localhost_37029_datanode____dbnr3u/webapp 2023-07-23 05:11:10,201 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3a1b66ac8cbca8bd: Processing first storage report for DS-30353731-4290-4b73-89d3-1aa78ff81a40 from datanode 2e17e9f3-f2b9-4b78-9364-f72d802852be 2023-07-23 05:11:10,201 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3a1b66ac8cbca8bd: from storage DS-30353731-4290-4b73-89d3-1aa78ff81a40 node DatanodeRegistration(127.0.0.1:37655, datanodeUuid=2e17e9f3-f2b9-4b78-9364-f72d802852be, infoPort=43369, infoSecurePort=0, ipcPort=36877, storageInfo=lv=-57;cid=testClusterID;nsid=198894400;c=1690089069606), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:10,201 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x3a1b66ac8cbca8bd: Processing first storage report for DS-46f68d0d-382c-4fba-8a82-8f3a480bdeff from datanode 2e17e9f3-f2b9-4b78-9364-f72d802852be 2023-07-23 05:11:10,201 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x3a1b66ac8cbca8bd: from storage DS-46f68d0d-382c-4fba-8a82-8f3a480bdeff node DatanodeRegistration(127.0.0.1:37655, datanodeUuid=2e17e9f3-f2b9-4b78-9364-f72d802852be, infoPort=43369, infoSecurePort=0, ipcPort=36877, storageInfo=lv=-57;cid=testClusterID;nsid=198894400;c=1690089069606), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:10,228 INFO [Listener at localhost/36877] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37029 2023-07-23 05:11:10,242 WARN [Listener at localhost/34155] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:10,367 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x75c648075c51d2a9: Processing first storage report for DS-6f243808-0fbf-4486-a19a-203bd848836b from datanode 3780d2af-7256-4639-ab06-824f61bbee2f 2023-07-23 05:11:10,367 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x75c648075c51d2a9: from storage DS-6f243808-0fbf-4486-a19a-203bd848836b node DatanodeRegistration(127.0.0.1:36537, datanodeUuid=3780d2af-7256-4639-ab06-824f61bbee2f, infoPort=45419, infoSecurePort=0, ipcPort=34155, storageInfo=lv=-57;cid=testClusterID;nsid=198894400;c=1690089069606), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:10,367 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x75c648075c51d2a9: Processing first storage 
report for DS-28b0cabc-4e62-4f18-aaa9-ddaccbc9b1f3 from datanode 3780d2af-7256-4639-ab06-824f61bbee2f 2023-07-23 05:11:10,367 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x75c648075c51d2a9: from storage DS-28b0cabc-4e62-4f18-aaa9-ddaccbc9b1f3 node DatanodeRegistration(127.0.0.1:36537, datanodeUuid=3780d2af-7256-4639-ab06-824f61bbee2f, infoPort=45419, infoSecurePort=0, ipcPort=34155, storageInfo=lv=-57;cid=testClusterID;nsid=198894400;c=1690089069606), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:10,469 DEBUG [Listener at localhost/34155] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13 2023-07-23 05:11:10,471 INFO [Listener at localhost/34155] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/zookeeper_0, clientPort=60906, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-23 05:11:10,473 INFO [Listener at localhost/34155] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=60906 2023-07-23 05:11:10,473 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,474 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,493 INFO [Listener at localhost/34155] util.FSUtils(471): Created version file at hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf with version=8 2023-07-23 05:11:10,494 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/hbase-staging 2023-07-23 05:11:10,495 DEBUG [Listener at localhost/34155] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 05:11:10,495 DEBUG [Listener at localhost/34155] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 05:11:10,495 DEBUG [Listener at localhost/34155] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-23 05:11:10,495 DEBUG [Listener at localhost/34155] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:10,496 INFO [Listener at localhost/34155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:10,497 INFO [Listener at localhost/34155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34851 2023-07-23 05:11:10,498 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,499 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,500 INFO [Listener at localhost/34155] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34851 connecting to ZooKeeper ensemble=127.0.0.1:60906 2023-07-23 05:11:10,508 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:348510x0, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:10,509 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34851-0x1019097228a0000 connected 2023-07-23 05:11:10,522 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:10,522 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:10,523 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:10,524 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34851 2023-07-23 05:11:10,525 DEBUG [Listener at localhost/34155] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34851 2023-07-23 05:11:10,525 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34851 2023-07-23 05:11:10,525 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34851 2023-07-23 05:11:10,526 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34851 2023-07-23 05:11:10,528 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:10,528 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:10,528 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:10,528 INFO [Listener at localhost/34155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 05:11:10,528 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:10,529 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:10,529 INFO [Listener at localhost/34155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 05:11:10,529 INFO [Listener at localhost/34155] http.HttpServer(1146): Jetty bound to port 36935 2023-07-23 05:11:10,529 INFO [Listener at localhost/34155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:10,531 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,532 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@780bfb43{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:10,532 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,532 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2e75a497{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:10,666 INFO [Listener at localhost/34155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:10,667 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:10,668 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:10,668 INFO [Listener at localhost/34155] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:11:10,670 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,671 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2e164a3a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/jetty-0_0_0_0-36935-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7134538815191807039/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 05:11:10,673 INFO [Listener at localhost/34155] server.AbstractConnector(333): Started ServerConnector@78016569{HTTP/1.1, (http/1.1)}{0.0.0.0:36935} 2023-07-23 05:11:10,673 INFO [Listener at localhost/34155] server.Server(415): Started @35513ms 2023-07-23 05:11:10,673 INFO [Listener at localhost/34155] master.HMaster(444): hbase.rootdir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf, hbase.cluster.distributed=false 2023-07-23 05:11:10,689 INFO [Listener at localhost/34155] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:10,689 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,690 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,690 
INFO [Listener at localhost/34155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:10,690 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,690 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:10,690 INFO [Listener at localhost/34155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:10,691 INFO [Listener at localhost/34155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:42007 2023-07-23 05:11:10,691 INFO [Listener at localhost/34155] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:10,692 DEBUG [Listener at localhost/34155] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:10,692 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,694 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,695 INFO [Listener at localhost/34155] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:42007 connecting to ZooKeeper ensemble=127.0.0.1:60906 2023-07-23 05:11:10,699 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:420070x0, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:10,700 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:42007-0x1019097228a0001 connected 2023-07-23 05:11:10,700 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:10,700 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:10,701 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:10,701 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=42007 2023-07-23 05:11:10,701 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=42007 2023-07-23 05:11:10,702 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=42007 2023-07-23 05:11:10,702 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=42007 2023-07-23 05:11:10,702 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=42007 2023-07-23 05:11:10,704 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:10,704 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:10,704 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:10,705 INFO [Listener at localhost/34155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:11:10,705 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:10,705 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:10,705 INFO [Listener at localhost/34155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 05:11:10,706 INFO [Listener at localhost/34155] http.HttpServer(1146): Jetty bound to port 37443 2023-07-23 05:11:10,706 INFO [Listener at localhost/34155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:10,707 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,707 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1cc8b87a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:10,708 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,708 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14dd322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:10,828 INFO [Listener at localhost/34155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:10,829 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:10,829 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:10,830 INFO [Listener at localhost/34155] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 05:11:10,831 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,832 INFO 
[Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@19a55b52{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/jetty-0_0_0_0-37443-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4360350418489892596/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:10,834 INFO [Listener at localhost/34155] server.AbstractConnector(333): Started ServerConnector@62fe5a63{HTTP/1.1, (http/1.1)}{0.0.0.0:37443} 2023-07-23 05:11:10,834 INFO [Listener at localhost/34155] server.Server(415): Started @35673ms 2023-07-23 05:11:10,852 INFO [Listener at localhost/34155] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:10,853 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,853 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,853 INFO [Listener at localhost/34155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:10,853 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:10,853 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:10,854 INFO [Listener at localhost/34155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:10,855 INFO [Listener at localhost/34155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34197 2023-07-23 05:11:10,856 INFO [Listener at localhost/34155] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:10,863 DEBUG [Listener at localhost/34155] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:10,863 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,865 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:10,866 INFO [Listener at localhost/34155] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34197 connecting to ZooKeeper ensemble=127.0.0.1:60906 2023-07-23 05:11:10,870 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:341970x0, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 
05:11:10,871 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34197-0x1019097228a0002 connected 2023-07-23 05:11:10,871 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:10,872 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:10,872 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:10,877 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34197 2023-07-23 05:11:10,877 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34197 2023-07-23 05:11:10,878 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34197 2023-07-23 05:11:10,880 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34197 2023-07-23 05:11:10,881 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34197 2023-07-23 05:11:10,883 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:10,883 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:10,884 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:10,884 INFO [Listener at localhost/34155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:11:10,884 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:10,885 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:10,885 INFO [Listener at localhost/34155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 05:11:10,886 INFO [Listener at localhost/34155] http.HttpServer(1146): Jetty bound to port 43935 2023-07-23 05:11:10,886 INFO [Listener at localhost/34155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:10,891 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,891 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ac8173c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:10,892 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:10,892 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b6d1808{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:10,961 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:10,961 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 05:11:10,961 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 05:11:11,006 INFO [Listener at localhost/34155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:11,007 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:11,008 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:11,008 INFO [Listener at localhost/34155] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:11:11,009 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:11,010 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@44267f0{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/jetty-0_0_0_0-43935-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3818281848269923048/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:11,011 INFO [Listener at localhost/34155] server.AbstractConnector(333): Started ServerConnector@82d2458{HTTP/1.1, (http/1.1)}{0.0.0.0:43935} 2023-07-23 05:11:11,011 INFO [Listener at localhost/34155] server.Server(415): Started @35851ms 2023-07-23 05:11:11,024 INFO [Listener at localhost/34155] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 
2023-07-23 05:11:11,024 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:11,024 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:11,025 INFO [Listener at localhost/34155] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:11,025 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:11,025 INFO [Listener at localhost/34155] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:11,025 INFO [Listener at localhost/34155] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:11,026 INFO [Listener at localhost/34155] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34321 2023-07-23 05:11:11,026 INFO [Listener at localhost/34155] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:11,028 DEBUG [Listener at localhost/34155] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:11,028 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:11,029 INFO [Listener at localhost/34155] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:11,031 INFO [Listener at localhost/34155] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34321 connecting to ZooKeeper ensemble=127.0.0.1:60906 2023-07-23 05:11:11,034 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:343210x0, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:11,035 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:343210x0, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:11,035 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34321-0x1019097228a0003 connected 2023-07-23 05:11:11,036 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:11,036 DEBUG [Listener at localhost/34155] zookeeper.ZKUtil(164): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:11,038 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34321 2023-07-23 
05:11:11,039 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34321 2023-07-23 05:11:11,040 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34321 2023-07-23 05:11:11,042 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34321 2023-07-23 05:11:11,043 DEBUG [Listener at localhost/34155] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34321 2023-07-23 05:11:11,045 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:11,046 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:11,046 INFO [Listener at localhost/34155] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:11,047 INFO [Listener at localhost/34155] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:11:11,047 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:11,047 INFO [Listener at localhost/34155] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:11,047 INFO [Listener at localhost/34155] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 05:11:11,048 INFO [Listener at localhost/34155] http.HttpServer(1146): Jetty bound to port 42851 2023-07-23 05:11:11,048 INFO [Listener at localhost/34155] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:11,054 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:11,055 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1782cf7b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:11,055 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:11,055 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a2ebbcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:11,183 INFO [Listener at localhost/34155] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:11,184 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:11,184 INFO [Listener at localhost/34155] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:11,185 INFO [Listener at localhost/34155] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 05:11:11,187 INFO [Listener at localhost/34155] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:11,188 INFO [Listener at localhost/34155] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5c5ad983{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/java.io.tmpdir/jetty-0_0_0_0-42851-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3656101263193677818/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:11,189 INFO [Listener at localhost/34155] server.AbstractConnector(333): Started ServerConnector@789ac99f{HTTP/1.1, (http/1.1)}{0.0.0.0:42851} 2023-07-23 05:11:11,189 INFO [Listener at localhost/34155] server.Server(415): Started @36029ms 2023-07-23 05:11:11,191 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:11,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@683b1fa6{HTTP/1.1, (http/1.1)}{0.0.0.0:34739} 2023-07-23 05:11:11,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @36036ms 2023-07-23 05:11:11,196 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,197 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, 
quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 05:11:11,197 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,200 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:11,200 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:11,200 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:11,200 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:11,201 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,204 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:11:11,206 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34851,1690089070495 from backup master directory 2023-07-23 05:11:11,206 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:11:11,207 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,207 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 05:11:11,207 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 05:11:11,207 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,225 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/hbase.id with ID: 88b0f9d6-b791-497a-8b9f-f0e3cc121ad7 2023-07-23 05:11:11,237 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:11,240 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,253 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x051fcfe4 to 127.0.0.1:60906 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:11,257 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@26cbe1fa, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:11,257 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:11,258 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 05:11:11,259 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:11,261 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store-tmp 2023-07-23 05:11:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; 
hotProtect now enable 2023-07-23 05:11:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 05:11:11,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 05:11:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:11,275 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:11,275 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:11,276 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/WALs/jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,279 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34851%2C1690089070495, suffix=, logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/WALs/jenkins-hbase4.apache.org,34851,1690089070495, archiveDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/oldWALs, maxLogs=10 2023-07-23 05:11:11,313 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK] 2023-07-23 05:11:11,313 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK] 2023-07-23 05:11:11,316 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK] 2023-07-23 05:11:11,322 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/WALs/jenkins-hbase4.apache.org,34851,1690089070495/jenkins-hbase4.apache.org%2C34851%2C1690089070495.1690089071279 2023-07-23 05:11:11,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK], DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK], 
DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK]] 2023-07-23 05:11:11,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:11,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:11,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:11,323 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:11,329 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:11,330 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 05:11:11,331 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 05:11:11,331 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,332 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:11,333 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:11,336 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:11,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:11,343 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10652408320, jitterRate=-0.007917165756225586}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:11,343 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:11,346 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 05:11:11,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 05:11:11,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 05:11:11,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 05:11:11,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 05:11:11,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-23 05:11:11,349 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 05:11:11,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 05:11:11,357 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
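The HRegion(1072) entry above prints the split-policy chain for the master local store with desiredMaxFileSize=10652408320 and jitterRate=-0.007917...; a minimal arithmetic sketch of where those numbers come from, assuming the stock defaults hbase.hregion.max.filesize=10737418240 (10 GiB) and hbase.hregion.memstore.flush.size=134217728 (both are assumptions, and the class name below is hypothetical):

    public class SplitSizeJitterSketch {
      public static void main(String[] args) {
        long maxFileSize = 10_737_418_240L;        // assumed hbase.hregion.max.filesize default (10 GiB)
        double jitterRate = -0.007917165756225586; // jitterRate printed by ConstantSizeRegionSplitPolicy above
        long desired = (long) (maxFileSize * (1 + jitterRate));
        System.out.println(desired);               // ~10652408310, matching the logged desiredMaxFileSize
        long flushSize = 134_217_728L;             // assumed hbase.hregion.memstore.flush.size default
        System.out.println(2 * flushSize);         // 268435456, the logged IncreasingToUpperBoundRegionSplitPolicy initialSize
      }
    }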
2023-07-23 05:11:11,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 05:11:11,358 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 05:11:11,358 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 05:11:11,362 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,362 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 05:11:11,363 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 05:11:11,364 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 05:11:11,365 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:11,366 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:11,366 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:11,366 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:11,366 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,366 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34851,1690089070495, sessionid=0x1019097228a0000, setting cluster-up flag (Was=false) 2023-07-23 05:11:11,370 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,374 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 05:11:11,375 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,379 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,382 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 05:11:11,383 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:11,384 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.hbase-snapshot/.tmp 2023-07-23 05:11:11,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 05:11:11,387 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 05:11:11,388 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 05:11:11,388 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:11:11,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 05:11:11,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 
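The CoprocessorHost entries above show org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint being loaded as a master system coprocessor; a minimal configuration sketch of the rsgroup wiring documented for branch-2, assuming the standard key names from the HBase reference guide (the class name below is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RsGroupWiringSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Master-side coprocessor endpoint, as loaded in the log above.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Group-aware balancer that delegates to the StochasticLoadBalancer configured below.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
      }
    }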
2023-07-23 05:11:11,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 05:11:11,392 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(951): ClusterId : 88b0f9d6-b791-497a-8b9f-f0e3cc121ad7 2023-07-23 05:11:11,395 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:11,395 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(951): ClusterId : 88b0f9d6-b791-497a-8b9f-f0e3cc121ad7 2023-07-23 05:11:11,397 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:11,397 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(951): ClusterId : 88b0f9d6-b791-497a-8b9f-f0e3cc121ad7 2023-07-23 05:11:11,397 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:11,400 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:11,400 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:11,400 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:11,401 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:11,400 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:11,401 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:11,405 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:11,408 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:11,409 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ReadOnlyZKClient(139): Connect 0x244bccb4 to 127.0.0.1:60906 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:11,409 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ReadOnlyZKClient(139): Connect 0x0a5b8acc to 127.0.0.1:60906 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:11,409 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:11,412 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ReadOnlyZKClient(139): Connect 0x662c945b to 127.0.0.1:60906 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:11,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 05:11:11,414 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, 
RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 05:11:11,415 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 05:11:11,415 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:11,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690089101428 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 05:11:11,429 DEBUG [RS:2;jenkins-hbase4:34321] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a8c364d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:11,429 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 05:11:11,429 DEBUG [RS:2;jenkins-hbase4:34321] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2eb3a7c4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:11,429 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 05:11:11,429 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 05:11:11,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 05:11:11,430 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 05:11:11,435 DEBUG [RS:0;jenkins-hbase4:42007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@a3e1bc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:11,435 DEBUG [RS:1;jenkins-hbase4:34197] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@68eb66ee, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:11,435 DEBUG [RS:0;jenkins-hbase4:42007] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@45a16fcc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:11,435 DEBUG [RS:1;jenkins-hbase4:34197] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6cd4ff2b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:11,436 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 05:11:11,436 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 05:11:11,439 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:11,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089071436,5,FailOnTimeoutGroup] 2023-07-23 05:11:11,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089071440,5,FailOnTimeoutGroup] 2023-07-23 05:11:11,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 05:11:11,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
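FSTableDescriptors(128) above prints the hbase:meta descriptor with its 'info' family attributes (BLOOMFILTER NONE, IN_MEMORY true, VERSIONS 3, BLOCKSIZE 8192); a minimal sketch of expressing the same family attributes with the public 2.x builder API, applied to a hypothetical user table named 'example':

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MetaLikeDescriptorSketch {
      public static void main(String[] args) {
        // Mirrors the 'info' family attributes printed in the log above.
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build();
        TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("example"))
            .setColumnFamily(info)
            .build();
        System.out.println(td);
      }
    }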
2023-07-23 05:11:11,447 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:42007 2023-07-23 05:11:11,447 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34321 2023-07-23 05:11:11,447 INFO [RS:0;jenkins-hbase4:42007] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:11,447 INFO [RS:0;jenkins-hbase4:42007] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:11,447 INFO [RS:2;jenkins-hbase4:34321] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:11,447 INFO [RS:2;jenkins-hbase4:34321] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:11,447 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:11:11,447 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:11:11,448 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34851,1690089070495 with isa=jenkins-hbase4.apache.org/172.31.14.131:42007, startcode=1690089070689 2023-07-23 05:11:11,448 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34851,1690089070495 with isa=jenkins-hbase4.apache.org/172.31.14.131:34321, startcode=1690089071024 2023-07-23 05:11:11,448 DEBUG [RS:0;jenkins-hbase4:42007] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:11,448 DEBUG [RS:2;jenkins-hbase4:34321] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:11,449 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:34197 2023-07-23 05:11:11,449 INFO [RS:1;jenkins-hbase4:34197] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:11,449 INFO [RS:1;jenkins-hbase4:34197] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:11,449 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1022): About to register with Master. 
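The reportForDuty entries above are the three region servers registering with the active master; a minimal client-side sketch for listing the servers once registration completes, assuming a reachable cluster and the standard Admin/ClusterMetrics API (class name hypothetical):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class LiveServersSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Servers that completed reportForDuty appear in the live-server metrics.
          for (ServerName sn : admin.getClusterMetrics().getLiveServerMetrics().keySet()) {
            System.out.println(sn);
          }
        }
      }
    }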
2023-07-23 05:11:11,450 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34851,1690089070495 with isa=jenkins-hbase4.apache.org/172.31.14.131:34197, startcode=1690089070852 2023-07-23 05:11:11,450 DEBUG [RS:1;jenkins-hbase4:34197] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:11,455 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49907, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:11,456 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54467, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:11,457 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44589, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:11,457 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34851] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,458 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:11:11,458 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 05:11:11,459 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34851] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,459 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf 2023-07-23 05:11:11,459 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44369 2023-07-23 05:11:11,459 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:11:11,459 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36935 2023-07-23 05:11:11,459 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34851] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,459 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 05:11:11,459 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 05:11:11,459 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 05:11:11,459 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf 2023-07-23 05:11:11,459 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44369 2023-07-23 05:11:11,459 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36935 2023-07-23 05:11:11,460 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf 2023-07-23 05:11:11,460 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:44369 2023-07-23 05:11:11,460 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36935 2023-07-23 05:11:11,460 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:11,466 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ZKUtil(162): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,467 WARN [RS:2;jenkins-hbase4:34321] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:11:11,467 INFO [RS:2;jenkins-hbase4:34321] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:11,467 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,470 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ZKUtil(162): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,471 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ZKUtil(162): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,471 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34321,1690089071024] 2023-07-23 05:11:11,474 WARN [RS:1;jenkins-hbase4:34197] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:11:11,473 WARN [RS:0;jenkins-hbase4:42007] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
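The ServerEventsListenerThread entries above show each newly registered server being added to the rsgroup bookkeeping ("Updated with servers: 1/2/3"); a sketch of querying the resulting 'default' group from a client, assuming the branch-2 RSGroupAdminClient API used by these tests (class name and usage are illustrative only):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class DefaultGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Servers that registered above land in the "default" group until moved elsewhere.
          RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
          System.out.println(defaultGroup.getServers());
        }
      }
    }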
2023-07-23 05:11:11,474 INFO [RS:1;jenkins-hbase4:34197] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:11,474 INFO [RS:0;jenkins-hbase4:42007] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:11,475 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,474 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34197,1690089070852] 2023-07-23 05:11:11,475 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1948): logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,476 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,42007,1690089070689] 2023-07-23 05:11:11,481 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ZKUtil(162): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,486 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:11,486 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ZKUtil(162): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,486 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ZKUtil(162): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,486 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ZKUtil(162): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,486 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ZKUtil(162): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,487 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:11,487 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 
'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf 2023-07-23 05:11:11,490 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:11,490 INFO [RS:2;jenkins-hbase4:34321] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:11,490 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ZKUtil(162): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,490 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ZKUtil(162): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,491 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ZKUtil(162): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,491 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ZKUtil(162): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,492 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:11,492 INFO [RS:0;jenkins-hbase4:42007] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:11,494 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:11,495 INFO [RS:1;jenkins-hbase4:34197] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:11,496 INFO [RS:0;jenkins-hbase4:42007] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:11,497 INFO [RS:2;jenkins-hbase4:34321] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:11,500 INFO [RS:0;jenkins-hbase4:42007] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:11,500 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
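CompactionConfiguration(173) above reports minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2 and minCompactSize 128 MB; a minimal sketch of the standard hbase.hstore.compaction.* keys behind those values (values shown are the logged ones; class name hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompactionTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Keys behind the CompactionConfiguration values logged above.
        conf.setInt("hbase.hstore.compaction.min", 3);     // minFilesToCompact
        conf.setInt("hbase.hstore.compaction.max", 10);    // maxFilesToCompact
        conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);
        conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
      }
    }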
2023-07-23 05:11:11,505 INFO [RS:1;jenkins-hbase4:34197] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:11,510 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:11,519 INFO [RS:2;jenkins-hbase4:34321] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:11,519 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,520 INFO [RS:1;jenkins-hbase4:34197] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:11,520 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,520 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:11,521 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:11,523 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,523 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,523 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
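MemStoreFlusher(125) above reports globalMemStoreLimit=782.4 M with a low-water mark of 743.3 M; a minimal arithmetic sketch of how those follow from the default fractions, assuming hbase.regionserver.global.memstore.size=0.4 and hbase.regionserver.global.memstore.size.lower.limit=0.95 of the upper limit (both assumptions; class name hypothetical):

    public class MemStoreLimitSketch {
      public static void main(String[] args) {
        double maxHeapMb = 782.4 / 0.4;    // implied max heap of roughly 1956 MB for this test JVM
        double upperMb = maxHeapMb * 0.4;  // 782.4 MB, the logged globalMemStoreLimit
        double lowerMb = upperMb * 0.95;   // ~743.3 MB, the logged globalMemStoreLimitLowMark
        System.out.printf("heap=%.1f upper=%.1f lower=%.1f%n", maxHeapMb, upperMb, lowerMb);
      }
    }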
2023-07-23 05:11:11,523 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,524 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,524 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,524 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,524 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,524 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,524 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:11,524 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 
05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:1;jenkins-hbase4:34197] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:2;jenkins-hbase4:34321] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,525 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,526 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,526 DEBUG [RS:0;jenkins-hbase4:42007] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:11,534 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,534 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:11,535 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,535 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,542 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:11,544 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:11:11,545 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/info 2023-07-23 05:11:11,546 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:11:11,549 INFO [RS:2;jenkins-hbase4:34321] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:11,550 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34321,1690089071024-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,550 INFO [RS:1;jenkins-hbase4:34197] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:11,550 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,551 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34197,1690089070852-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
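The ChoreService entries above enable periodic chores such as CompactionChecker and MemstoreFlusherChore at a 1000 ms period; as a rough JDK analogy only (not HBase's ChoreService implementation), the scheduling amounts to fixed-rate background tasks:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ChoreAnalogySketch {
      public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
        // Analogous to CompactionChecker, period=1000 ms.
        pool.scheduleAtFixedRate(() -> System.out.println("check stores for compaction"),
            0, 1000, TimeUnit.MILLISECONDS);
        // Analogous to MemstoreFlusherChore, period=1000 ms.
        pool.scheduleAtFixedRate(() -> System.out.println("check memstores for flush"),
            0, 1000, TimeUnit.MILLISECONDS);
      }
    }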
2023-07-23 05:11:11,551 INFO [RS:0;jenkins-hbase4:42007] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:11,551 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,42007,1690089070689-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 05:11:11,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:11:11,557 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:11:11,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:11:11,559 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/table 2023-07-23 05:11:11,559 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:11:11,560 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,565 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) 
under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740 2023-07-23 05:11:11,565 INFO [RS:2;jenkins-hbase4:34321] regionserver.Replication(203): jenkins-hbase4.apache.org,34321,1690089071024 started 2023-07-23 05:11:11,566 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34321,1690089071024, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34321, sessionid=0x1019097228a0003 2023-07-23 05:11:11,566 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740 2023-07-23 05:11:11,569 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 05:11:11,572 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:11:11,582 INFO [RS:1;jenkins-hbase4:34197] regionserver.Replication(203): jenkins-hbase4.apache.org,34197,1690089070852 started 2023-07-23 05:11:11,582 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:11,583 DEBUG [RS:2;jenkins-hbase4:34321] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,583 DEBUG [RS:2;jenkins-hbase4:34321] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34321,1690089071024' 2023-07-23 05:11:11,583 DEBUG [RS:2;jenkins-hbase4:34321] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:11,583 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34197,1690089070852, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34197, sessionid=0x1019097228a0002 2023-07-23 05:11:11,583 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:11,583 DEBUG [RS:1;jenkins-hbase4:34197] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,583 DEBUG [RS:1;jenkins-hbase4:34197] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34197,1690089070852' 2023-07-23 05:11:11,583 DEBUG [RS:1;jenkins-hbase4:34197] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:11,583 DEBUG [RS:2;jenkins-hbase4:34321] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:11,584 DEBUG [RS:1;jenkins-hbase4:34197] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:11,589 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:11,589 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:11,590 DEBUG 
[RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:11,590 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:11,590 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:11,590 DEBUG [RS:1;jenkins-hbase4:34197] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:11,590 DEBUG [RS:2;jenkins-hbase4:34321] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,590 DEBUG [RS:1;jenkins-hbase4:34197] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34197,1690089070852' 2023-07-23 05:11:11,590 DEBUG [RS:1;jenkins-hbase4:34197] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:11,590 DEBUG [RS:2;jenkins-hbase4:34321] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34321,1690089071024' 2023-07-23 05:11:11,590 DEBUG [RS:2;jenkins-hbase4:34321] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:11,590 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11729205120, jitterRate=0.09236735105514526}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:11:11,590 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:11:11,590 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:11:11,591 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:11:11,591 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:11:11,591 DEBUG [RS:2;jenkins-hbase4:34321] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:11,591 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:11:11,591 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:11:11,591 DEBUG [RS:1;jenkins-hbase4:34197] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:11,591 DEBUG [RS:2;jenkins-hbase4:34321] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:11,591 INFO [RS:2;jenkins-hbase4:34321] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 05:11:11,591 DEBUG [RS:1;jenkins-hbase4:34197] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:11,591 INFO [RS:1;jenkins-hbase4:34197] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 05:11:11,594 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:11,594 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,596 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ZKUtil(398): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 05:11:11,596 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:11,597 INFO [RS:1;jenkins-hbase4:34197] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 05:11:11,597 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ZKUtil(398): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 05:11:11,597 INFO [RS:2;jenkins-hbase4:34321] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 05:11:11,597 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:11:11,597 INFO [RS:0;jenkins-hbase4:42007] regionserver.Replication(203): jenkins-hbase4.apache.org,42007,1690089070689 started 2023-07-23 05:11:11,597 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,597 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,597 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,42007,1690089070689, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:42007, sessionid=0x1019097228a0001 2023-07-23 05:11:11,598 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:11,598 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,598 DEBUG [RS:0;jenkins-hbase4:42007] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,598 DEBUG [RS:0;jenkins-hbase4:42007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42007,1690089070689' 2023-07-23 05:11:11,598 DEBUG [RS:0;jenkins-hbase4:42007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:11,598 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:11,599 DEBUG [RS:0;jenkins-hbase4:42007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:11,604 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 05:11:11,604 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 05:11:11,604 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 05:11:11,616 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 05:11:11,617 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:11,617 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:11,617 DEBUG [RS:0;jenkins-hbase4:42007] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:11,617 DEBUG [RS:0;jenkins-hbase4:42007] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,42007,1690089070689' 2023-07-23 05:11:11,617 DEBUG [RS:0;jenkins-hbase4:42007] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:11,617 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 05:11:11,618 DEBUG [RS:0;jenkins-hbase4:42007] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:11,619 DEBUG [RS:0;jenkins-hbase4:42007] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:11,619 INFO [RS:0;jenkins-hbase4:42007] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-23 05:11:11,619 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,620 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ZKUtil(398): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-23 05:11:11,620 INFO [RS:0;jenkins-hbase4:42007] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-23 05:11:11,620 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,620 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:11,702 INFO [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34321%2C1690089071024, suffix=, logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34321,1690089071024, archiveDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs, maxLogs=32 2023-07-23 05:11:11,702 INFO [RS:1;jenkins-hbase4:34197] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34197%2C1690089070852, suffix=, logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34197,1690089070852, archiveDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs, maxLogs=32 2023-07-23 05:11:11,724 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK] 2023-07-23 05:11:11,724 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK] 2023-07-23 05:11:11,724 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK] 2023-07-23 05:11:11,726 INFO [RS:0;jenkins-hbase4:42007] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C42007%2C1690089070689, suffix=, logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,42007,1690089070689, archiveDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs, maxLogs=32 2023-07-23 05:11:11,726 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK] 2023-07-23 05:11:11,726 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK] 2023-07-23 05:11:11,731 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK] 2023-07-23 05:11:11,739 INFO [RS:1;jenkins-hbase4:34197] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34197,1690089070852/jenkins-hbase4.apache.org%2C34197%2C1690089070852.1690089071704 2023-07-23 05:11:11,739 INFO [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34321,1690089071024/jenkins-hbase4.apache.org%2C34321%2C1690089071024.1690089071704 2023-07-23 05:11:11,740 DEBUG [RS:1;jenkins-hbase4:34197] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK], DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK], DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK]] 2023-07-23 05:11:11,740 DEBUG [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK], DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK], DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK]] 2023-07-23 05:11:11,745 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK] 2023-07-23 05:11:11,749 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK] 2023-07-23 05:11:11,749 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK] 2023-07-23 05:11:11,752 INFO [RS:0;jenkins-hbase4:42007] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,42007,1690089070689/jenkins-hbase4.apache.org%2C42007%2C1690089070689.1690089071727 2023-07-23 05:11:11,753 DEBUG [RS:0;jenkins-hbase4:42007] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK], DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK], DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK]] 2023-07-23 05:11:11,768 DEBUG [jenkins-hbase4:34851] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 05:11:11,768 DEBUG [jenkins-hbase4:34851] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:11,768 DEBUG [jenkins-hbase4:34851] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:11,768 DEBUG [jenkins-hbase4:34851] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:11,768 DEBUG [jenkins-hbase4:34851] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:11,768 DEBUG [jenkins-hbase4:34851] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:11,769 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34321,1690089071024, state=OPENING 2023-07-23 05:11:11,770 DEBUG [PEWorker-3] 
zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 05:11:11,771 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:11,772 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34321,1690089071024}] 2023-07-23 05:11:11,772 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:11:11,927 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:11,927 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:11:11,928 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46838, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:11:11,934 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 05:11:11,934 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:11,935 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34321%2C1690089071024.meta, suffix=.meta, logDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34321,1690089071024, archiveDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs, maxLogs=32 2023-07-23 05:11:11,950 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK] 2023-07-23 05:11:11,950 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK] 2023-07-23 05:11:11,950 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK] 2023-07-23 05:11:11,952 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/WALs/jenkins-hbase4.apache.org,34321,1690089071024/jenkins-hbase4.apache.org%2C34321%2C1690089071024.meta.1690089071936.meta 2023-07-23 05:11:11,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:37655,DS-30353731-4290-4b73-89d3-1aa78ff81a40,DISK], DatanodeInfoWithStorage[127.0.0.1:32981,DS-5cb00f08-f655-4094-b517-e0140a3e44b9,DISK], 
DatanodeInfoWithStorage[127.0.0.1:36537,DS-6f243808-0fbf-4486-a19a-203bd848836b,DISK]] 2023-07-23 05:11:11,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:11,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:11:11,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 05:11:11,953 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-23 05:11:11,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 05:11:11,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:11,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 05:11:11,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 05:11:11,955 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:11:11,956 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/info 2023-07-23 05:11:11,956 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/info 2023-07-23 05:11:11,956 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:11:11,957 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,957 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 05:11:11,958 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:11:11,958 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:11:11,958 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:11:11,959 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,959 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:11:11,960 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/table 2023-07-23 05:11:11,960 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/table 2023-07-23 05:11:11,960 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:11:11,960 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:11,961 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740 2023-07-23 05:11:11,962 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740 2023-07-23 05:11:11,964 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 05:11:11,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:11:11,967 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10752638400, jitterRate=0.0014174878597259521}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:11:11,967 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:11:11,968 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690089071927 2023-07-23 05:11:11,972 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 05:11:11,973 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 05:11:11,973 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34321,1690089071024, state=OPEN 2023-07-23 05:11:11,975 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 05:11:11,975 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:11:11,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 05:11:11,977 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34321,1690089071024 in 203 msec 2023-07-23 05:11:11,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 05:11:11,978 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 373 msec 2023-07-23 05:11:11,980 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 590 msec 2023-07-23 05:11:11,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690089071980, completionTime=-1 2023-07-23 
05:11:11,980 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 05:11:11,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-23 05:11:11,983 DEBUG [hconnection-0x61f6e29-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:11,984 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46844, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:11,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 05:11:11,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690089131986 2023-07-23 05:11:11,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690089191986 2023-07-23 05:11:11,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 6 msec 2023-07-23 05:11:11,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34851,1690089070495-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34851,1690089070495-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34851,1690089070495-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34851, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:11,991 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 05:11:11,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:11,992 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 05:11:11,992 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 05:11:11,994 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:11,994 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:11,996 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:11,996 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023 empty. 2023-07-23 05:11:11,997 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:11,997 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 05:11:12,010 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:12,011 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:12,012 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 05:11:12,013 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:12,013 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 
a481e3d53b99e7162f967c69848d4023, NAME => 'hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp 2023-07-23 05:11:12,014 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:12,016 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,016 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9 empty. 2023-07-23 05:11:12,017 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,017 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 05:11:12,026 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,026 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing a481e3d53b99e7162f967c69848d4023, disabling compactions & flushes 2023-07-23 05:11:12,026 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:12,026 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:12,026 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. after waiting 0 ms 2023-07-23 05:11:12,026 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:12,026 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 
2023-07-23 05:11:12,026 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for a481e3d53b99e7162f967c69848d4023: 2023-07-23 05:11:12,028 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:12,029 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089072029"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089072029"}]},"ts":"1690089072029"} 2023-07-23 05:11:12,032 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:11:12,033 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:12,033 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072033"}]},"ts":"1690089072033"} 2023-07-23 05:11:12,034 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 05:11:12,038 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:12,038 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:12,039 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:12,039 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:12,039 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:12,039 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:12,039 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a481e3d53b99e7162f967c69848d4023, ASSIGN}] 2023-07-23 05:11:12,039 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5a7c8ce13557d67a690695b1b7e5f1b9, NAME => 'hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp 2023-07-23 05:11:12,042 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=a481e3d53b99e7162f967c69848d4023, ASSIGN 2023-07-23 05:11:12,043 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=a481e3d53b99e7162f967c69848d4023, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34197,1690089070852; forceNewPlan=false, retain=false 2023-07-23 05:11:12,059 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,059 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 5a7c8ce13557d67a690695b1b7e5f1b9, disabling compactions & flushes 2023-07-23 05:11:12,060 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:12,060 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:12,060 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. after waiting 0 ms 2023-07-23 05:11:12,060 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:12,060 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:12,060 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 5a7c8ce13557d67a690695b1b7e5f1b9: 2023-07-23 05:11:12,062 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:12,063 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089072063"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089072063"}]},"ts":"1690089072063"} 2023-07-23 05:11:12,064 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 05:11:12,065 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:12,065 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072065"}]},"ts":"1690089072065"} 2023-07-23 05:11:12,066 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 05:11:12,069 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:12,069 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:12,069 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:12,069 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:12,069 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:12,069 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=5a7c8ce13557d67a690695b1b7e5f1b9, ASSIGN}] 2023-07-23 05:11:12,071 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=5a7c8ce13557d67a690695b1b7e5f1b9, ASSIGN 2023-07-23 05:11:12,072 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=5a7c8ce13557d67a690695b1b7e5f1b9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34197,1690089070852; forceNewPlan=false, retain=false 2023-07-23 05:11:12,072 INFO [jenkins-hbase4:34851] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-23 05:11:12,074 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a481e3d53b99e7162f967c69848d4023, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:12,074 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089072074"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089072074"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089072074"}]},"ts":"1690089072074"} 2023-07-23 05:11:12,074 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=5a7c8ce13557d67a690695b1b7e5f1b9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:12,074 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089072074"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089072074"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089072074"}]},"ts":"1690089072074"} 2023-07-23 05:11:12,077 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure a481e3d53b99e7162f967c69848d4023, server=jenkins-hbase4.apache.org,34197,1690089070852}] 2023-07-23 05:11:12,078 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 5a7c8ce13557d67a690695b1b7e5f1b9, server=jenkins-hbase4.apache.org,34197,1690089070852}] 2023-07-23 05:11:12,229 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:12,230 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:11:12,231 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37868, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:11:12,236 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 
2023-07-23 05:11:12,236 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a481e3d53b99e7162f967c69848d4023, NAME => 'hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:12,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,237 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,239 INFO [StoreOpener-a481e3d53b99e7162f967c69848d4023-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,241 DEBUG [StoreOpener-a481e3d53b99e7162f967c69848d4023-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/info 2023-07-23 05:11:12,241 DEBUG [StoreOpener-a481e3d53b99e7162f967c69848d4023-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/info 2023-07-23 05:11:12,242 INFO [StoreOpener-a481e3d53b99e7162f967c69848d4023-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a481e3d53b99e7162f967c69848d4023 columnFamilyName info 2023-07-23 05:11:12,242 INFO [StoreOpener-a481e3d53b99e7162f967c69848d4023-1] regionserver.HStore(310): Store=a481e3d53b99e7162f967c69848d4023/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:12,243 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,244 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,247 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:12,250 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:12,251 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a481e3d53b99e7162f967c69848d4023; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11925123680, jitterRate=0.11061368882656097}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:12,251 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a481e3d53b99e7162f967c69848d4023: 2023-07-23 05:11:12,252 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023., pid=8, masterSystemTime=1690089072229 2023-07-23 05:11:12,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:12,257 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:12,257 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 
2023-07-23 05:11:12,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5a7c8ce13557d67a690695b1b7e5f1b9, NAME => 'hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:12,257 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=a481e3d53b99e7162f967c69848d4023, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:12,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:11:12,257 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089072257"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089072257"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089072257"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089072257"}]},"ts":"1690089072257"} 2023-07-23 05:11:12,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. service=MultiRowMutationService 2023-07-23 05:11:12,257 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-23 05:11:12,257 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,258 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,259 INFO [StoreOpener-5a7c8ce13557d67a690695b1b7e5f1b9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,261 DEBUG [StoreOpener-5a7c8ce13557d67a690695b1b7e5f1b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/m 2023-07-23 05:11:12,261 DEBUG [StoreOpener-5a7c8ce13557d67a690695b1b7e5f1b9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/m 2023-07-23 05:11:12,261 INFO [StoreOpener-5a7c8ce13557d67a690695b1b7e5f1b9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5a7c8ce13557d67a690695b1b7e5f1b9 columnFamilyName m 2023-07-23 05:11:12,262 INFO [StoreOpener-5a7c8ce13557d67a690695b1b7e5f1b9-1] regionserver.HStore(310): Store=5a7c8ce13557d67a690695b1b7e5f1b9/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:12,263 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-23 05:11:12,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,263 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure a481e3d53b99e7162f967c69848d4023, server=jenkins-hbase4.apache.org,34197,1690089070852 in 184 msec 2023-07-23 05:11:12,264 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-23 05:11:12,265 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=a481e3d53b99e7162f967c69848d4023, ASSIGN in 224 msec 2023-07-23 05:11:12,266 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:12,266 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072266"}]},"ts":"1690089072266"} 2023-07-23 05:11:12,267 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 05:11:12,268 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:12,270 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:12,272 INFO 
[PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 278 msec 2023-07-23 05:11:12,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:12,276 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5a7c8ce13557d67a690695b1b7e5f1b9; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2bdc6089, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:12,276 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5a7c8ce13557d67a690695b1b7e5f1b9: 2023-07-23 05:11:12,277 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9., pid=9, masterSystemTime=1690089072229 2023-07-23 05:11:12,279 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:12,279 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:12,279 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=5a7c8ce13557d67a690695b1b7e5f1b9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:12,279 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089072279"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089072279"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089072279"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089072279"}]},"ts":"1690089072279"} 2023-07-23 05:11:12,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-23 05:11:12,284 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 5a7c8ce13557d67a690695b1b7e5f1b9, server=jenkins-hbase4.apache.org,34197,1690089070852 in 204 msec 2023-07-23 05:11:12,287 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-23 05:11:12,288 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=5a7c8ce13557d67a690695b1b7e5f1b9, ASSIGN in 215 msec 2023-07-23 05:11:12,288 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:12,288 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072288"}]},"ts":"1690089072288"} 2023-07-23 05:11:12,290 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 05:11:12,293 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:12,293 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 05:11:12,296 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 283 msec 2023-07-23 05:11:12,296 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:12,298 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:12,304 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:12,306 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37874, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:12,314 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 05:11:12,323 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 05:11:12,323 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-23 05:11:12,332 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:12,332 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:12,332 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:12,336 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 25 msec 2023-07-23 05:11:12,340 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:11:12,341 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34851,1690089070495] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 05:11:12,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 05:11:12,357 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:12,360 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 13 msec 2023-07-23 05:11:12,373 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 05:11:12,375 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 05:11:12,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.167sec 2023-07-23 05:11:12,378 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
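With hbase:rsgroup online and GroupBasedLoadBalancer reported up, group membership becomes readable through the RSGroup admin endpoint; the ListRSGroupInfos request that appears further down in this log is that RPC. A hedged client-side sketch, with an assumed connection handle and printout (the test itself goes through a verifying wrapper, as the VerifyingRSGroupAdminClient lines below show):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch only: read RSGroup membership once the hbase:rsgroup table is online.
    public final class ListRSGroupsSketch {
        static void list(Connection connection) throws IOException {
            RSGroupAdminClient groups = new RSGroupAdminClient(connection);
            for (RSGroupInfo group : groups.listRSGroups()) {
                System.out.println(group.getName() + " -> " + group.getServers());
            }
        }
    }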
2023-07-23 05:11:12,379 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:12,380 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-23 05:11:12,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-23 05:11:12,382 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:12,383 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:12,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-23 05:11:12,385 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,385 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0 empty. 2023-07-23 05:11:12,386 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,386 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-23 05:11:12,393 DEBUG [Listener at localhost/34155] zookeeper.ReadOnlyZKClient(139): Connect 0x5732b517 to 127.0.0.1:60906 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:12,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-23 05:11:12,394 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-23 05:11:12,397 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:12,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:12,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
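The create request above spells out the hbase:quota schema: two column families, 'q' and 'u', each keeping a single version. The master creates this system table itself; purely as an illustration of the same shape through the public 2.x client API (class and variable names here are assumptions, not the test's code):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch only: a descriptor with the same shape as the logged hbase:quota request.
    public final class QuotaLikeTableSketch {
        static void create(Admin admin, TableName name) throws IOException {
            admin.createTable(TableDescriptorBuilder.newBuilder(name)
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q"))
                    .setMaxVersions(1).build())
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("u"))
                    .setMaxVersions(1).build())
                .build());
        }
    }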
2023-07-23 05:11:12,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 05:11:12,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34851,1690089070495-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 05:11:12,398 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34851,1690089070495-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 05:11:12,434 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 05:11:12,442 DEBUG [Listener at localhost/34155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a4e8813, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:12,447 DEBUG [hconnection-0x57f67347-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:12,450 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46848, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:12,452 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:12,452 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:12,453 INFO [Listener at localhost/34155] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:12,457 DEBUG [Listener at localhost/34155] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 05:11:12,457 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => ea99f34122fad5fe8e5f859b676cc9e0, NAME => 'hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp 2023-07-23 05:11:12,459 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41328, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 05:11:12,463 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 05:11:12,463 DEBUG 
[Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:12,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 05:11:12,466 DEBUG [Listener at localhost/34155] zookeeper.ReadOnlyZKClient(139): Connect 0x4162bfb9 to 127.0.0.1:60906 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:12,480 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,481 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing ea99f34122fad5fe8e5f859b676cc9e0, disabling compactions & flushes 2023-07-23 05:11:12,481 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:12,481 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:12,481 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. after waiting 0 ms 2023-07-23 05:11:12,481 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:12,481 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:12,481 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for ea99f34122fad5fe8e5f859b676cc9e0: 2023-07-23 05:11:12,484 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:12,485 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690089072485"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089072485"}]},"ts":"1690089072485"} 2023-07-23 05:11:12,488 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
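Once "Minicluster is up" is logged just above, the test side typically obtains a shared connection from the testing utility and drives everything that follows through an Admin handle. A minimal sketch of that lifecycle, assuming standard HBaseTestingUtility usage rather than this test's exact wiring:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.client.Admin;

    // Sketch only: the usual lifecycle around a mini-cluster like the one running in this log.
    public final class MiniClusterAdminSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            util.startMiniCluster();              // brings up DFS, ZooKeeper, master and regionservers
            Admin admin = util.getAdmin();        // used for the namespace and table requests that follow
            try {
                // ... test body: createNamespace, createTable, disableTable, ...
            } finally {
                util.shutdownMiniCluster();
            }
        }
    }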
2023-07-23 05:11:12,489 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:12,489 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072489"}]},"ts":"1690089072489"} 2023-07-23 05:11:12,490 DEBUG [Listener at localhost/34155] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@256a9099, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:12,490 INFO [Listener at localhost/34155] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:60906 2023-07-23 05:11:12,490 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-23 05:11:12,494 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:12,495 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1019097228a000a connected 2023-07-23 05:11:12,495 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:12,496 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:12,496 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:12,496 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:12,496 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:12,496 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=ea99f34122fad5fe8e5f859b676cc9e0, ASSIGN}] 2023-07-23 05:11:12,497 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=ea99f34122fad5fe8e5f859b676cc9e0, ASSIGN 2023-07-23 05:11:12,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-23 05:11:12,498 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=ea99f34122fad5fe8e5f859b676cc9e0, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42007,1690089070689; forceNewPlan=false, retain=false 2023-07-23 05:11:12,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-23 05:11:12,505 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 05:11:12,509 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:12,512 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 13 msec 2023-07-23 05:11:12,606 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-23 05:11:12,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:12,614 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-23 05:11:12,616 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:12,616 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-23 05:11:12,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:11:12,618 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:12,619 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:11:12,621 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:12,622 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:12,623 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d empty. 
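The two requests above map to two plain Admin calls: create the np1 namespace capped at 5 regions and 2 tables, then create np1:table1 with a single 'fam1' family (one region, so well within the quota). A hedged sketch of the equivalent client code, not the test's own source:

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Sketch only: namespace with region/table quotas, then a one-family table inside it.
    public final class Np1Sketch {
        static void createNamespaceAndTable(Admin admin) throws IOException {
            admin.createNamespace(NamespaceDescriptor.create("np1")
                .addConfiguration("hbase.namespace.quota.maxregions", "5")
                .addConfiguration("hbase.namespace.quota.maxtables", "2")
                .build());
            admin.createTable(TableDescriptorBuilder
                .newBuilder(TableName.valueOf("np1", "table1"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                .build());
        }
    }

The maxregions limit is what later rejects np1:table2: a table requested with 6 regions would push the namespace past 5 (one region already exists), and the procedure is rolled back with the QuotaExceededException shown further down.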
2023-07-23 05:11:12,623 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:12,624 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-23 05:11:12,637 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:12,638 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 62bee92b84085bc36a6e6bbbc7fda20d, NAME => 'np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp 2023-07-23 05:11:12,649 INFO [jenkins-hbase4:34851] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-23 05:11:12,650 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ea99f34122fad5fe8e5f859b676cc9e0, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:12,651 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690089072650"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089072650"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089072650"}]},"ts":"1690089072650"} 2023-07-23 05:11:12,652 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure ea99f34122fad5fe8e5f859b676cc9e0, server=jenkins-hbase4.apache.org,42007,1690089070689}] 2023-07-23 05:11:12,658 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,658 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 62bee92b84085bc36a6e6bbbc7fda20d, disabling compactions & flushes 2023-07-23 05:11:12,658 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:12,658 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:12,659 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. after waiting 0 ms 2023-07-23 05:11:12,659 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 
2023-07-23 05:11:12,659 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:12,659 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 62bee92b84085bc36a6e6bbbc7fda20d: 2023-07-23 05:11:12,661 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:12,662 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089072662"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089072662"}]},"ts":"1690089072662"} 2023-07-23 05:11:12,663 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:11:12,664 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:12,664 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072664"}]},"ts":"1690089072664"} 2023-07-23 05:11:12,665 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-23 05:11:12,674 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:12,674 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:12,674 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:12,675 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:12,675 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:12,675 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, ASSIGN}] 2023-07-23 05:11:12,676 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, ASSIGN 2023-07-23 05:11:12,677 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,42007,1690089070689; forceNewPlan=false, retain=false 2023-07-23 05:11:12,718 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:11:12,810 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:12,810 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for 
service=AdminService, sasl=false 2023-07-23 05:11:12,811 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38804, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:11:12,816 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:12,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ea99f34122fad5fe8e5f859b676cc9e0, NAME => 'hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:12,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,816 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,818 INFO [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,819 DEBUG [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0/q 2023-07-23 05:11:12,819 DEBUG [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0/q 2023-07-23 05:11:12,819 INFO [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ea99f34122fad5fe8e5f859b676cc9e0 columnFamilyName q 2023-07-23 05:11:12,820 INFO [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] regionserver.HStore(310): Store=ea99f34122fad5fe8e5f859b676cc9e0/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-23 05:11:12,820 INFO [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,821 DEBUG [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0/u 2023-07-23 05:11:12,821 DEBUG [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0/u 2023-07-23 05:11:12,821 INFO [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ea99f34122fad5fe8e5f859b676cc9e0 columnFamilyName u 2023-07-23 05:11:12,822 INFO [StoreOpener-ea99f34122fad5fe8e5f859b676cc9e0-1] regionserver.HStore(310): Store=ea99f34122fad5fe8e5f859b676cc9e0/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:12,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,823 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,825 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 2023-07-23 05:11:12,826 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:12,827 INFO [jenkins-hbase4:34851] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 05:11:12,828 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=62bee92b84085bc36a6e6bbbc7fda20d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:12,829 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089072828"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089072828"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089072828"}]},"ts":"1690089072828"} 2023-07-23 05:11:12,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:12,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ea99f34122fad5fe8e5f859b676cc9e0; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10338779840, jitterRate=-0.0371260941028595}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-23 05:11:12,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ea99f34122fad5fe8e5f859b676cc9e0: 2023-07-23 05:11:12,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0., pid=16, masterSystemTime=1690089072810 2023-07-23 05:11:12,832 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 62bee92b84085bc36a6e6bbbc7fda20d, server=jenkins-hbase4.apache.org,42007,1690089070689}] 2023-07-23 05:11:12,833 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:12,834 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 
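The FlushLargeStoresPolicy message above explains the flushSizeLowerBound=67108864 printed for the hbase:quota region: with no hbase.hregion.percolumnfamilyflush.size.lower.bound in the table descriptor, the policy falls back to the region's memstore flush size divided by the number of families (the "64.0 M" it prints). Assuming the stock 128 MiB default for hbase.hregion.memstore.flush.size, the arithmetic works out exactly:

    // Hedged arithmetic sketch, not test code: derive the logged flushSizeLowerBound.
    public class FlushLowerBoundCheck {
        public static void main(String[] args) {
            long memstoreFlushSize = 134_217_728L;  // assumed hbase.hregion.memstore.flush.size default (128 MiB)
            int families = 2;                       // hbase:quota has two families, 'q' and 'u'
            System.out.println(memstoreFlushSize / families);  // 67108864 (64 MiB), matching the log
        }
    }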
2023-07-23 05:11:12,834 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=ea99f34122fad5fe8e5f859b676cc9e0, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:12,834 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690089072834"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089072834"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089072834"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089072834"}]},"ts":"1690089072834"} 2023-07-23 05:11:12,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-23 05:11:12,837 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure ea99f34122fad5fe8e5f859b676cc9e0, server=jenkins-hbase4.apache.org,42007,1690089070689 in 183 msec 2023-07-23 05:11:12,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 05:11:12,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=ea99f34122fad5fe8e5f859b676cc9e0, ASSIGN in 341 msec 2023-07-23 05:11:12,839 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:12,839 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089072839"}]},"ts":"1690089072839"} 2023-07-23 05:11:12,840 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-23 05:11:12,842 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:12,843 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 463 msec 2023-07-23 05:11:12,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:11:12,934 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-23 05:11:12,993 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 
2023-07-23 05:11:12,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 62bee92b84085bc36a6e6bbbc7fda20d, NAME => 'np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:12,993 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:12,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:12,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:12,994 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,001 INFO [StoreOpener-62bee92b84085bc36a6e6bbbc7fda20d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,003 DEBUG [StoreOpener-62bee92b84085bc36a6e6bbbc7fda20d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/fam1 2023-07-23 05:11:13,003 DEBUG [StoreOpener-62bee92b84085bc36a6e6bbbc7fda20d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/fam1 2023-07-23 05:11:13,004 INFO [StoreOpener-62bee92b84085bc36a6e6bbbc7fda20d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 62bee92b84085bc36a6e6bbbc7fda20d columnFamilyName fam1 2023-07-23 05:11:13,004 INFO [StoreOpener-62bee92b84085bc36a6e6bbbc7fda20d-1] regionserver.HStore(310): Store=62bee92b84085bc36a6e6bbbc7fda20d/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:13,005 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,006 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,011 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:13,012 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 62bee92b84085bc36a6e6bbbc7fda20d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9939505600, jitterRate=-0.07431140542030334}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:13,012 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 62bee92b84085bc36a6e6bbbc7fda20d: 2023-07-23 05:11:13,013 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d., pid=18, masterSystemTime=1690089072984 2023-07-23 05:11:13,014 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:13,014 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:13,015 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=62bee92b84085bc36a6e6bbbc7fda20d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:13,015 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089073015"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089073015"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089073015"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089073015"}]},"ts":"1690089073015"} 2023-07-23 05:11:13,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 05:11:13,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 62bee92b84085bc36a6e6bbbc7fda20d, server=jenkins-hbase4.apache.org,42007,1690089070689 in 184 msec 2023-07-23 05:11:13,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-23 05:11:13,021 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, ASSIGN in 343 msec 2023-07-23 05:11:13,023 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:13,023 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089073023"}]},"ts":"1690089073023"} 2023-07-23 05:11:13,024 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-23 05:11:13,026 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:13,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 414 msec 2023-07-23 05:11:13,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-23 05:11:13,221 INFO [Listener at localhost/34155] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-23 05:11:13,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:13,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-23 05:11:13,225 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:13,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-23 05:11:13,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 05:11:13,241 DEBUG [PEWorker-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:13,243 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38818, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:13,246 INFO [PEWorker-2] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. 
This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=22 msec 2023-07-23 05:11:13,327 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 05:11:13,329 INFO [Listener at localhost/34155] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-23 05:11:13,330 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:13,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:13,332 INFO [Listener at localhost/34155] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-23 05:11:13,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-23 05:11:13,333 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-23 05:11:13,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 05:11:13,336 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089073335"}]},"ts":"1690089073335"} 2023-07-23 05:11:13,337 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-23 05:11:13,338 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-23 05:11:13,339 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, UNASSIGN}] 2023-07-23 05:11:13,340 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, UNASSIGN 2023-07-23 05:11:13,340 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=62bee92b84085bc36a6e6bbbc7fda20d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:13,340 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089073340"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089073340"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089073340"}]},"ts":"1690089073340"} 2023-07-23 05:11:13,342 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized 
subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 62bee92b84085bc36a6e6bbbc7fda20d, server=jenkins-hbase4.apache.org,42007,1690089070689}] 2023-07-23 05:11:13,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 05:11:13,494 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 62bee92b84085bc36a6e6bbbc7fda20d, disabling compactions & flushes 2023-07-23 05:11:13,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:13,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:13,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. after waiting 0 ms 2023-07-23 05:11:13,495 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 2023-07-23 05:11:13,500 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:13,501 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d. 
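The pid=19 rollback above is the namespace region quota at work: before any region of np1:table2 is created, the master checks how many regions the namespace would hold afterwards and fails the CreateTableProcedure with QuotaExceededException. A minimal client-side sketch of that interaction follows; the quota value, column family and split keys are illustrative assumptions, not the test's actual setup.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceQuotaSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Namespace capped at 5 regions (assumed quota value for illustration).
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .build());

      // First table: a single region, fits inside the quota.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());

      // Second table pre-split into 6 regions: 1 existing + 6 new > 5, so the
      // master rolls the CreateTableProcedure back before any region exists.
      byte[][] splits = {Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
          Bytes.toBytes("4"), Bytes.toBytes("5")};
      try {
        admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table2"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build(), splits);
      } catch (IOException e) {
        // The rollback surfaces to the client as a quotas.QuotaExceededException.
        System.out.println("create rejected: " + e.getMessage());
      }
    }
  }
}
```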
2023-07-23 05:11:13,501 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 62bee92b84085bc36a6e6bbbc7fda20d: 2023-07-23 05:11:13,507 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,508 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=62bee92b84085bc36a6e6bbbc7fda20d, regionState=CLOSED 2023-07-23 05:11:13,508 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089073508"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089073508"}]},"ts":"1690089073508"} 2023-07-23 05:11:13,511 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-23 05:11:13,511 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 62bee92b84085bc36a6e6bbbc7fda20d, server=jenkins-hbase4.apache.org,42007,1690089070689 in 168 msec 2023-07-23 05:11:13,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-23 05:11:13,512 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=62bee92b84085bc36a6e6bbbc7fda20d, UNASSIGN in 172 msec 2023-07-23 05:11:13,513 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089073513"}]},"ts":"1690089073513"} 2023-07-23 05:11:13,514 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-23 05:11:13,517 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-23 05:11:13,518 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 185 msec 2023-07-23 05:11:13,638 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 05:11:13,638 INFO [Listener at localhost/34155] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-23 05:11:13,639 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-23 05:11:13,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-23 05:11:13,641 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 05:11:13,642 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-23 05:11:13,642 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 05:11:13,643 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:13,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:11:13,646 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,647 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 05:11:13,648 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/fam1, FileablePath, hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/recovered.edits] 2023-07-23 05:11:13,653 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/recovered.edits/4.seqid to hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/archive/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d/recovered.edits/4.seqid 2023-07-23 05:11:13,653 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/.tmp/data/np1/table1/62bee92b84085bc36a6e6bbbc7fda20d 2023-07-23 05:11:13,653 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-23 05:11:13,655 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 05:11:13,657 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-23 05:11:13,658 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-23 05:11:13,659 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 05:11:13,659 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-23 05:11:13,659 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089073659"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:13,661 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 05:11:13,661 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 62bee92b84085bc36a6e6bbbc7fda20d, NAME => 'np1:table1,,1690089072611.62bee92b84085bc36a6e6bbbc7fda20d.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 05:11:13,661 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 
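Pid=20 and pid=23 above are the usual two-step table teardown: DisableTableProcedure unassigns the region and marks the table DISABLED in hbase:meta, after which DeleteTableProcedure archives the region directory via HFileArchiver, deletes the META rows and drops the table descriptor. A hedged Admin-side equivalent, assuming an already-open Admin handle:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class DropTableSketch {
  /** Mirrors the DISABLE (pid=20) then DELETE (pid=23) procedures seen in the log. */
  public static void dropTable(Admin admin, String name) throws IOException {
    TableName tn = TableName.valueOf(name); // e.g. "np1:table1"
    if (admin.tableExists(tn)) {
      if (admin.isTableEnabled(tn)) {
        admin.disableTable(tn);   // blocks until the table reaches DISABLED
      }
      admin.deleteTable(tn);      // archives region dirs and removes the META rows
    }
  }
}
```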
2023-07-23 05:11:13,661 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089073661"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:13,662 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-23 05:11:13,665 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-23 05:11:13,666 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 26 msec 2023-07-23 05:11:13,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-23 05:11:13,754 INFO [Listener at localhost/34155] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-23 05:11:13,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-23 05:11:13,771 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-23 05:11:13,774 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 05:11:13,778 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 05:11:13,780 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 05:11:13,781 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 05:11:13,782 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-23 05:11:13,782 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:13,783 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 05:11:13,785 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-23 05:11:13,786 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 23 msec 2023-07-23 05:11:13,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34851] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-23 05:11:13,883 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 05:11:13,883 INFO [Listener at 
localhost/34155] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5732b517 to 127.0.0.1:60906 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] util.JVMClusterUtil(257): Found active master hash=113604932, stopped=false 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 05:11:13,883 DEBUG [Listener at localhost/34155] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-23 05:11:13,884 INFO [Listener at localhost/34155] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:13,885 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:13,886 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:13,886 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:13,886 INFO [Listener at localhost/34155] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 05:11:13,886 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:13,886 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:13,887 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:13,887 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:13,888 DEBUG [Listener at localhost/34155] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x051fcfe4 to 127.0.0.1:60906 2023-07-23 05:11:13,888 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:13,888 DEBUG [zk-event-processor-pool-0] 
zookeeper.ZKUtil(164): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:13,888 DEBUG [Listener at localhost/34155] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:13,888 INFO [Listener at localhost/34155] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,42007,1690089070689' ***** 2023-07-23 05:11:13,888 INFO [Listener at localhost/34155] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:13,888 INFO [Listener at localhost/34155] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34197,1690089070852' ***** 2023-07-23 05:11:13,889 INFO [Listener at localhost/34155] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:13,888 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:13,889 INFO [Listener at localhost/34155] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34321,1690089071024' ***** 2023-07-23 05:11:13,889 INFO [Listener at localhost/34155] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:13,889 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:13,890 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:13,904 INFO [RS:0;jenkins-hbase4:42007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@19a55b52{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:13,904 INFO [RS:1;jenkins-hbase4:34197] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@44267f0{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:13,905 INFO [RS:2;jenkins-hbase4:34321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5c5ad983{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:13,905 INFO [RS:1;jenkins-hbase4:34197] server.AbstractConnector(383): Stopped ServerConnector@82d2458{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:13,905 INFO [RS:0;jenkins-hbase4:42007] server.AbstractConnector(383): Stopped ServerConnector@62fe5a63{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:13,905 INFO [RS:1;jenkins-hbase4:34197] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:13,905 INFO [RS:0;jenkins-hbase4:42007] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:13,905 INFO [RS:2;jenkins-hbase4:34321] server.AbstractConnector(383): Stopped ServerConnector@789ac99f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:13,905 INFO [RS:1;jenkins-hbase4:34197] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b6d1808{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:13,907 INFO [RS:2;jenkins-hbase4:34321] session.HouseKeeper(149): node0 Stopped scavenging 
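The DeleteNamespaceProcedure (pid=24) only succeeds because np1 no longer holds any tables, and immediately afterwards the test listener asks HBaseTestingUtility to shut the minicluster down, which is what produces the three STOPPING region-server banners above. A rough outline of that test lifecycle, with the cluster size taken from this run and the rest assumed for illustration:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Admin;

public class TeardownSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(3);        // 1 master + 3 region servers, as in this run
    Admin admin = util.getAdmin();
    // ... test body: create namespace/tables, exercise rsgroup admin calls ...
    admin.deleteNamespace("np1");    // only succeeds once the namespace holds no tables
    util.shutdownMiniCluster();      // stops region servers, master, DFS and ZooKeeper
  }
}
```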
2023-07-23 05:11:13,908 INFO [RS:1;jenkins-hbase4:34197] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ac8173c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:13,907 INFO [RS:0;jenkins-hbase4:42007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14dd322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:13,908 INFO [RS:2;jenkins-hbase4:34321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a2ebbcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:13,908 INFO [RS:0;jenkins-hbase4:42007] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1cc8b87a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:13,908 INFO [RS:2;jenkins-hbase4:34321] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1782cf7b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:13,909 INFO [RS:0;jenkins-hbase4:42007] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:13,909 INFO [RS:2;jenkins-hbase4:34321] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:13,909 INFO [RS:0;jenkins-hbase4:42007] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:13,909 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:13,909 INFO [RS:0;jenkins-hbase4:42007] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:13,910 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(3305): Received CLOSE for ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:13,910 INFO [RS:1;jenkins-hbase4:34197] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:13,911 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ea99f34122fad5fe8e5f859b676cc9e0, disabling compactions & flushes 2023-07-23 05:11:13,911 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:13,909 INFO [RS:2;jenkins-hbase4:34321] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:13,909 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:13,911 INFO [RS:2;jenkins-hbase4:34321] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:13,911 INFO [RS:1;jenkins-hbase4:34197] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-23 05:11:13,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:13,910 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:13,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:13,912 INFO [RS:1;jenkins-hbase4:34197] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:13,912 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:13,912 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(3305): Received CLOSE for 5a7c8ce13557d67a690695b1b7e5f1b9 2023-07-23 05:11:13,912 DEBUG [RS:2;jenkins-hbase4:34321] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x244bccb4 to 127.0.0.1:60906 2023-07-23 05:11:13,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. after waiting 0 ms 2023-07-23 05:11:13,912 DEBUG [RS:2;jenkins-hbase4:34321] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:13,912 DEBUG [RS:0;jenkins-hbase4:42007] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x0a5b8acc to 127.0.0.1:60906 2023-07-23 05:11:13,912 INFO [RS:2;jenkins-hbase4:34321] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:13,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5a7c8ce13557d67a690695b1b7e5f1b9, disabling compactions & flushes 2023-07-23 05:11:13,912 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(3305): Received CLOSE for a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:13,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:13,913 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:13,913 DEBUG [RS:1;jenkins-hbase4:34197] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x662c945b to 127.0.0.1:60906 2023-07-23 05:11:13,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:13,913 INFO [RS:2;jenkins-hbase4:34321] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:13,913 INFO [RS:2;jenkins-hbase4:34321] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 05:11:13,913 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 05:11:13,912 DEBUG [RS:0;jenkins-hbase4:42007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:13,914 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 05:11:13,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:13,913 DEBUG [RS:1;jenkins-hbase4:34197] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:13,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. after waiting 0 ms 2023-07-23 05:11:13,914 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:11:13,914 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-23 05:11:13,915 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 05:11:13,914 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 05:11:13,914 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:13,914 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-23 05:11:13,915 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1478): Online Regions={5a7c8ce13557d67a690695b1b7e5f1b9=hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9., a481e3d53b99e7162f967c69848d4023=hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023.} 2023-07-23 05:11:13,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:11:13,915 DEBUG [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1504): Waiting on 5a7c8ce13557d67a690695b1b7e5f1b9, a481e3d53b99e7162f967c69848d4023 2023-07-23 05:11:13,915 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1478): Online Regions={ea99f34122fad5fe8e5f859b676cc9e0=hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0.} 2023-07-23 05:11:13,915 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:11:13,915 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5a7c8ce13557d67a690695b1b7e5f1b9 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-23 05:11:13,915 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:11:13,915 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:11:13,915 DEBUG [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1504): Waiting on ea99f34122fad5fe8e5f859b676cc9e0 2023-07-23 05:11:13,915 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, 
dataSize=5.89 KB heapSize=11.09 KB 2023-07-23 05:11:13,921 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/quota/ea99f34122fad5fe8e5f859b676cc9e0/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:13,922 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:13,922 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ea99f34122fad5fe8e5f859b676cc9e0: 2023-07-23 05:11:13,923 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690089072378.ea99f34122fad5fe8e5f859b676cc9e0. 2023-07-23 05:11:13,939 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:13,939 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:13,940 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:13,942 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/.tmp/m/de597a27792b43ae93835f5cf615e6e0 2023-07-23 05:11:13,943 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/.tmp/info/abc4fd825998414b8a0ee7b4940626af 2023-07-23 05:11:13,951 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/.tmp/m/de597a27792b43ae93835f5cf615e6e0 as hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/m/de597a27792b43ae93835f5cf615e6e0 2023-07-23 05:11:13,957 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for abc4fd825998414b8a0ee7b4940626af 2023-07-23 05:11:13,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/m/de597a27792b43ae93835f5cf615e6e0, entries=1, sequenceid=7, filesize=4.9 K 2023-07-23 05:11:13,961 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 5a7c8ce13557d67a690695b1b7e5f1b9 in 46ms, sequenceid=7, compaction requested=false 2023-07-23 05:11:13,961 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-23 05:11:13,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/rsgroup/5a7c8ce13557d67a690695b1b7e5f1b9/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-23 05:11:13,973 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:13,974 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:13,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5a7c8ce13557d67a690695b1b7e5f1b9: 2023-07-23 05:11:13,974 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690089072010.5a7c8ce13557d67a690695b1b7e5f1b9. 2023-07-23 05:11:13,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/.tmp/rep_barrier/a5d770f19e6c4114931269b6b72b39aa 2023-07-23 05:11:13,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a481e3d53b99e7162f967c69848d4023, disabling compactions & flushes 2023-07-23 05:11:13,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:13,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:13,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. after waiting 0 ms 2023-07-23 05:11:13,977 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 
2023-07-23 05:11:13,977 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing a481e3d53b99e7162f967c69848d4023 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-23 05:11:13,983 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5d770f19e6c4114931269b6b72b39aa 2023-07-23 05:11:14,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/.tmp/info/ef19bca502fa4bb4a8f6e1df149e7af5 2023-07-23 05:11:14,013 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef19bca502fa4bb4a8f6e1df149e7af5 2023-07-23 05:11:14,014 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/.tmp/info/ef19bca502fa4bb4a8f6e1df149e7af5 as hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/info/ef19bca502fa4bb4a8f6e1df149e7af5 2023-07-23 05:11:14,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef19bca502fa4bb4a8f6e1df149e7af5 2023-07-23 05:11:14,022 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/info/ef19bca502fa4bb4a8f6e1df149e7af5, entries=3, sequenceid=8, filesize=5.0 K 2023-07-23 05:11:14,023 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for a481e3d53b99e7162f967c69848d4023 in 46ms, sequenceid=8, compaction requested=false 2023-07-23 05:11:14,023 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-23 05:11:14,030 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/namespace/a481e3d53b99e7162f967c69848d4023/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-23 05:11:14,031 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 2023-07-23 05:11:14,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a481e3d53b99e7162f967c69848d4023: 2023-07-23 05:11:14,031 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690089071992.a481e3d53b99e7162f967c69848d4023. 
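The close sequences above show what happens to unflushed edits when a region shuts down: the memstore is flushed to a temporary HFile under .tmp, the file is committed into the column family directory, and a recovered.edits/&lt;n&gt;.seqid marker records the new max sequence id. The same flush path can also be driven explicitly from a client; a small hedged sketch using the public Admin API, with table names assumed:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class FlushSketch {
  /** Forces the same memstore-to-HFile flush path seen in the close logs above. */
  public static void flushAll(Admin admin, String... tables) throws IOException {
    for (String t : tables) {
      admin.flush(TableName.valueOf(t));  // e.g. "hbase:namespace", "hbase:rsgroup"
    }
  }
}
```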
2023-07-23 05:11:14,115 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 05:11:14,115 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34197,1690089070852; all regions closed. 2023-07-23 05:11:14,115 DEBUG [RS:1;jenkins-hbase4:34197] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 05:11:14,115 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,42007,1690089070689; all regions closed. 2023-07-23 05:11:14,116 DEBUG [RS:0;jenkins-hbase4:42007] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 05:11:14,123 DEBUG [RS:0;jenkins-hbase4:42007] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs 2023-07-23 05:11:14,123 DEBUG [RS:1;jenkins-hbase4:34197] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs 2023-07-23 05:11:14,123 INFO [RS:1;jenkins-hbase4:34197] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34197%2C1690089070852:(num 1690089071704) 2023-07-23 05:11:14,123 INFO [RS:0;jenkins-hbase4:42007] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C42007%2C1690089070689:(num 1690089071727) 2023-07-23 05:11:14,123 DEBUG [RS:0;jenkins-hbase4:42007] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:14,123 DEBUG [RS:1;jenkins-hbase4:34197] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:14,123 INFO [RS:0;jenkins-hbase4:42007] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:14,123 INFO [RS:1;jenkins-hbase4:34197] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:14,124 INFO [RS:0;jenkins-hbase4:42007] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:14,124 INFO [RS:0;jenkins-hbase4:42007] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:14,124 INFO [RS:1;jenkins-hbase4:34197] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:14,124 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:14,124 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:14,124 INFO [RS:1;jenkins-hbase4:34197] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:14,124 INFO [RS:0;jenkins-hbase4:42007] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:14,124 INFO [RS:0;jenkins-hbase4:42007] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:14,124 INFO [RS:1;jenkins-hbase4:34197] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:14,124 INFO [RS:1;jenkins-hbase4:34197] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 05:11:14,125 INFO [RS:1;jenkins-hbase4:34197] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34197 2023-07-23 05:11:14,126 INFO [RS:0;jenkins-hbase4:42007] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:42007 2023-07-23 05:11:14,128 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:14,129 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:14,129 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:14,128 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:14,129 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,42007,1690089070689 2023-07-23 05:11:14,129 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:14,129 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:14,130 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:14,130 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:14,130 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34197,1690089070852 2023-07-23 05:11:14,130 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34197,1690089070852] 2023-07-23 05:11:14,130 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34197,1690089070852; numProcessing=1 2023-07-23 05:11:14,133 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): 
Node /hbase/draining/jenkins-hbase4.apache.org,34197,1690089070852 already deleted, retry=false 2023-07-23 05:11:14,133 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34197,1690089070852 expired; onlineServers=2 2023-07-23 05:11:14,133 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,42007,1690089070689] 2023-07-23 05:11:14,133 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,42007,1690089070689; numProcessing=2 2023-07-23 05:11:14,135 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,42007,1690089070689 already deleted, retry=false 2023-07-23 05:11:14,135 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,42007,1690089070689 expired; onlineServers=1 2023-07-23 05:11:14,315 DEBUG [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 05:11:14,386 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,386 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34197-0x1019097228a0002, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,386 INFO [RS:1;jenkins-hbase4:34197] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34197,1690089070852; zookeeper connection closed. 2023-07-23 05:11:14,387 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@685a0678] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@685a0678 2023-07-23 05:11:14,407 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/.tmp/table/db4fe29dce61444e83fd64a9c830511f 2023-07-23 05:11:14,413 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db4fe29dce61444e83fd64a9c830511f 2023-07-23 05:11:14,414 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/.tmp/info/abc4fd825998414b8a0ee7b4940626af as hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/info/abc4fd825998414b8a0ee7b4940626af 2023-07-23 05:11:14,421 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for abc4fd825998414b8a0ee7b4940626af 2023-07-23 05:11:14,421 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/info/abc4fd825998414b8a0ee7b4940626af, entries=32, sequenceid=31, filesize=8.5 K 2023-07-23 05:11:14,422 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/.tmp/rep_barrier/a5d770f19e6c4114931269b6b72b39aa as hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/rep_barrier/a5d770f19e6c4114931269b6b72b39aa 2023-07-23 05:11:14,427 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a5d770f19e6c4114931269b6b72b39aa 2023-07-23 05:11:14,427 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/rep_barrier/a5d770f19e6c4114931269b6b72b39aa, entries=1, sequenceid=31, filesize=4.9 K 2023-07-23 05:11:14,428 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/.tmp/table/db4fe29dce61444e83fd64a9c830511f as hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/table/db4fe29dce61444e83fd64a9c830511f 2023-07-23 05:11:14,434 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for db4fe29dce61444e83fd64a9c830511f 2023-07-23 05:11:14,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/table/db4fe29dce61444e83fd64a9c830511f, entries=8, sequenceid=31, filesize=5.2 K 2023-07-23 05:11:14,435 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 520ms, sequenceid=31, compaction requested=false 2023-07-23 05:11:14,435 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-23 05:11:14,444 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-23 05:11:14,444 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:14,444 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:14,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:11:14,445 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:14,486 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,486 INFO [RS:0;jenkins-hbase4:42007] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,42007,1690089070689; zookeeper 
connection closed. 2023-07-23 05:11:14,486 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:42007-0x1019097228a0001, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,488 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@256431df] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@256431df 2023-07-23 05:11:14,515 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34321,1690089071024; all regions closed. 2023-07-23 05:11:14,516 DEBUG [RS:2;jenkins-hbase4:34321] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-23 05:11:14,522 DEBUG [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs 2023-07-23 05:11:14,522 INFO [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34321%2C1690089071024.meta:.meta(num 1690089071936) 2023-07-23 05:11:14,528 DEBUG [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/oldWALs 2023-07-23 05:11:14,528 INFO [RS:2;jenkins-hbase4:34321] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34321%2C1690089071024:(num 1690089071704) 2023-07-23 05:11:14,528 DEBUG [RS:2;jenkins-hbase4:34321] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:14,528 INFO [RS:2;jenkins-hbase4:34321] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:14,528 INFO [RS:2;jenkins-hbase4:34321] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:14,528 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 05:11:14,529 INFO [RS:2;jenkins-hbase4:34321] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34321 2023-07-23 05:11:14,532 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:14,532 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34321,1690089071024 2023-07-23 05:11:14,534 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34321,1690089071024] 2023-07-23 05:11:14,534 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34321,1690089071024; numProcessing=3 2023-07-23 05:11:14,536 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34321,1690089071024 already deleted, retry=false 2023-07-23 05:11:14,536 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34321,1690089071024 expired; onlineServers=0 2023-07-23 05:11:14,536 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34851,1690089070495' ***** 2023-07-23 05:11:14,536 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 05:11:14,536 DEBUG [M:0;jenkins-hbase4:34851] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2fafd44, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:14,536 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:14,538 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:14,538 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:14,538 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:14,538 INFO [M:0;jenkins-hbase4:34851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e164a3a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 05:11:14,539 INFO [M:0;jenkins-hbase4:34851] server.AbstractConnector(383): Stopped ServerConnector@78016569{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:14,539 INFO [M:0;jenkins-hbase4:34851] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:14,539 INFO [M:0;jenkins-hbase4:34851] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@2e75a497{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:14,539 INFO [M:0;jenkins-hbase4:34851] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@780bfb43{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:14,540 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34851,1690089070495 2023-07-23 05:11:14,540 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34851,1690089070495; all regions closed. 2023-07-23 05:11:14,540 DEBUG [M:0;jenkins-hbase4:34851] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:14,540 INFO [M:0;jenkins-hbase4:34851] master.HMaster(1491): Stopping master jetty server 2023-07-23 05:11:14,541 INFO [M:0;jenkins-hbase4:34851] server.AbstractConnector(383): Stopped ServerConnector@683b1fa6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:14,541 DEBUG [M:0;jenkins-hbase4:34851] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 05:11:14,541 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 05:11:14,542 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089071436] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089071436,5,FailOnTimeoutGroup] 2023-07-23 05:11:14,542 DEBUG [M:0;jenkins-hbase4:34851] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 05:11:14,542 INFO [M:0;jenkins-hbase4:34851] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 05:11:14,542 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089071440] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089071440,5,FailOnTimeoutGroup] 2023-07-23 05:11:14,542 INFO [M:0;jenkins-hbase4:34851] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-23 05:11:14,543 INFO [M:0;jenkins-hbase4:34851] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:14,543 DEBUG [M:0;jenkins-hbase4:34851] master.HMaster(1512): Stopping service threads 2023-07-23 05:11:14,543 INFO [M:0;jenkins-hbase4:34851] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 05:11:14,544 ERROR [M:0;jenkins-hbase4:34851] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 05:11:14,544 INFO [M:0;jenkins-hbase4:34851] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 05:11:14,544 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-23 05:11:14,544 DEBUG [M:0;jenkins-hbase4:34851] zookeeper.ZKUtil(398): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 05:11:14,544 WARN [M:0;jenkins-hbase4:34851] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 05:11:14,544 INFO [M:0;jenkins-hbase4:34851] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 05:11:14,545 INFO [M:0;jenkins-hbase4:34851] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 05:11:14,545 DEBUG [M:0;jenkins-hbase4:34851] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 05:11:14,545 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:14,545 DEBUG [M:0;jenkins-hbase4:34851] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:14,545 DEBUG [M:0;jenkins-hbase4:34851] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 05:11:14,545 DEBUG [M:0;jenkins-hbase4:34851] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:14,545 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.95 KB heapSize=109.10 KB 2023-07-23 05:11:14,560 INFO [M:0;jenkins-hbase4:34851] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.95 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/737595c321724470b0cad82cfc27cf40 2023-07-23 05:11:14,566 DEBUG [M:0;jenkins-hbase4:34851] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/737595c321724470b0cad82cfc27cf40 as hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/737595c321724470b0cad82cfc27cf40 2023-07-23 05:11:14,571 INFO [M:0;jenkins-hbase4:34851] regionserver.HStore(1080): Added hdfs://localhost:44369/user/jenkins/test-data/8641a5b1-b43a-6f9f-dd05-080e6010a0bf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/737595c321724470b0cad82cfc27cf40, entries=24, sequenceid=194, filesize=12.4 K 2023-07-23 05:11:14,572 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegion(2948): Finished flush of dataSize ~92.95 KB/95179, heapSize ~109.09 KB/111704, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 27ms, sequenceid=194, compaction requested=false 2023-07-23 05:11:14,574 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 05:11:14,574 DEBUG [M:0;jenkins-hbase4:34851] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:14,578 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:14,578 INFO [M:0;jenkins-hbase4:34851] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 05:11:14,579 INFO [M:0;jenkins-hbase4:34851] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34851 2023-07-23 05:11:14,580 DEBUG [M:0;jenkins-hbase4:34851] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34851,1690089070495 already deleted, retry=false 2023-07-23 05:11:14,635 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,635 INFO [RS:2;jenkins-hbase4:34321] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34321,1690089071024; zookeeper connection closed. 2023-07-23 05:11:14,635 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): regionserver:34321-0x1019097228a0003, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,635 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@49a4eaea] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@49a4eaea 2023-07-23 05:11:14,635 INFO [Listener at localhost/34155] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-23 05:11:14,735 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,735 INFO [M:0;jenkins-hbase4:34851] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34851,1690089070495; zookeeper connection closed. 
2023-07-23 05:11:14,735 DEBUG [Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): master:34851-0x1019097228a0000, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:14,736 WARN [Listener at localhost/34155] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:14,744 INFO [Listener at localhost/34155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:14,850 WARN [BP-155397880-172.31.14.131-1690089069606 heartbeating to localhost/127.0.0.1:44369] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:14,850 WARN [BP-155397880-172.31.14.131-1690089069606 heartbeating to localhost/127.0.0.1:44369] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-155397880-172.31.14.131-1690089069606 (Datanode Uuid 3780d2af-7256-4639-ab06-824f61bbee2f) service to localhost/127.0.0.1:44369 2023-07-23 05:11:14,851 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/dfs/data/data5/current/BP-155397880-172.31.14.131-1690089069606] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:14,851 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/dfs/data/data6/current/BP-155397880-172.31.14.131-1690089069606] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:14,853 WARN [Listener at localhost/34155] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:14,856 INFO [Listener at localhost/34155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:14,960 WARN [BP-155397880-172.31.14.131-1690089069606 heartbeating to localhost/127.0.0.1:44369] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:14,960 WARN [BP-155397880-172.31.14.131-1690089069606 heartbeating to localhost/127.0.0.1:44369] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-155397880-172.31.14.131-1690089069606 (Datanode Uuid 2e17e9f3-f2b9-4b78-9364-f72d802852be) service to localhost/127.0.0.1:44369 2023-07-23 05:11:14,961 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/dfs/data/data3/current/BP-155397880-172.31.14.131-1690089069606] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:14,961 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/dfs/data/data4/current/BP-155397880-172.31.14.131-1690089069606] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:14,962 WARN [Listener at localhost/34155] datanode.DirectoryScanner(534): DirectoryScanner: 
shutdown has been called 2023-07-23 05:11:14,965 INFO [Listener at localhost/34155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:15,073 WARN [BP-155397880-172.31.14.131-1690089069606 heartbeating to localhost/127.0.0.1:44369] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:15,073 WARN [BP-155397880-172.31.14.131-1690089069606 heartbeating to localhost/127.0.0.1:44369] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-155397880-172.31.14.131-1690089069606 (Datanode Uuid c5e774cd-89a3-4f5e-9980-3cc55251aea2) service to localhost/127.0.0.1:44369 2023-07-23 05:11:15,074 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/dfs/data/data1/current/BP-155397880-172.31.14.131-1690089069606] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:15,074 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/cluster_599f895e-8c1f-63ff-627f-5d6c5c28e013/dfs/data/data2/current/BP-155397880-172.31.14.131-1690089069606] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:15,085 INFO [Listener at localhost/34155] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:15,201 INFO [Listener at localhost/34155] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 05:11:15,230 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-23 05:11:15,230 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-23 05:11:15,230 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.log.dir so I do NOT create it in target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa 2023-07-23 05:11:15,230 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/e5462a79-6102-6d72-1294-bfd010974c13/hadoop.tmp.dir so I do NOT create it in target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa 2023-07-23 05:11:15,230 INFO [Listener at localhost/34155] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b, deleteOnExit=true 2023-07-23 05:11:15,230 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-23 05:11:15,231 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting test.cache.data to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/test.cache.data in system properties and HBase conf 2023-07-23 05:11:15,231 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.tmp.dir in system properties and HBase conf 2023-07-23 05:11:15,231 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir in system properties and HBase conf 2023-07-23 05:11:15,231 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-23 05:11:15,231 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-23 05:11:15,231 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-23 05:11:15,231 DEBUG [Listener at localhost/34155] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] 
hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 05:11:15,232 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-23 05:11:15,233 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/nfs.dump.dir in system properties and HBase conf 2023-07-23 05:11:15,233 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir in system properties and HBase conf 2023-07-23 05:11:15,233 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-23 05:11:15,233 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-23 05:11:15,233 INFO [Listener at localhost/34155] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-23 05:11:15,252 WARN [Listener at localhost/34155] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 05:11:15,252 WARN [Listener at localhost/34155] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 05:11:15,293 WARN [Listener at localhost/34155] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:15,296 INFO [Listener at localhost/34155] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:15,300 DEBUG 
[Listener at localhost/34155-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1019097228a000a, quorum=127.0.0.1:60906, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-23 05:11:15,300 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1019097228a000a, quorum=127.0.0.1:60906, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-23 05:11:15,300 INFO [Listener at localhost/34155] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/Jetty_localhost_34193_hdfs____.jo0rik/webapp 2023-07-23 05:11:15,397 INFO [Listener at localhost/34155] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34193 2023-07-23 05:11:15,401 WARN [Listener at localhost/34155] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-23 05:11:15,401 WARN [Listener at localhost/34155] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-23 05:11:15,441 WARN [Listener at localhost/37269] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:15,452 WARN [Listener at localhost/37269] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 05:11:15,454 WARN [Listener at localhost/37269] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:15,456 INFO [Listener at localhost/37269] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:15,461 INFO [Listener at localhost/37269] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/Jetty_localhost_37375_datanode____.5b0wfg/webapp 2023-07-23 05:11:15,555 INFO [Listener at localhost/37269] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37375 2023-07-23 05:11:15,563 WARN [Listener at localhost/33363] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:15,576 WARN [Listener at localhost/33363] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 05:11:15,578 WARN [Listener at localhost/33363] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:15,579 INFO [Listener at localhost/33363] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:15,584 INFO [Listener at localhost/33363] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/Jetty_localhost_34311_datanode____ikgfp5/webapp 2023-07-23 05:11:15,671 INFO [Block report 
processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x47b3772398a14f04: Processing first storage report for DS-aec4ff92-dd92-46fd-b8b6-504567634b69 from datanode 23ebb0ec-1e0a-4f8d-b243-54b97e902f2b 2023-07-23 05:11:15,671 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x47b3772398a14f04: from storage DS-aec4ff92-dd92-46fd-b8b6-504567634b69 node DatanodeRegistration(127.0.0.1:40503, datanodeUuid=23ebb0ec-1e0a-4f8d-b243-54b97e902f2b, infoPort=34571, infoSecurePort=0, ipcPort=33363, storageInfo=lv=-57;cid=testClusterID;nsid=91701207;c=1690089075257), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:15,671 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x47b3772398a14f04: Processing first storage report for DS-200422ca-3546-4dec-90bd-ce0707a9ebb3 from datanode 23ebb0ec-1e0a-4f8d-b243-54b97e902f2b 2023-07-23 05:11:15,671 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x47b3772398a14f04: from storage DS-200422ca-3546-4dec-90bd-ce0707a9ebb3 node DatanodeRegistration(127.0.0.1:40503, datanodeUuid=23ebb0ec-1e0a-4f8d-b243-54b97e902f2b, infoPort=34571, infoSecurePort=0, ipcPort=33363, storageInfo=lv=-57;cid=testClusterID;nsid=91701207;c=1690089075257), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:15,686 INFO [Listener at localhost/33363] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34311 2023-07-23 05:11:15,692 WARN [Listener at localhost/38451] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:15,709 WARN [Listener at localhost/38451] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-23 05:11:15,711 WARN [Listener at localhost/38451] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-23 05:11:15,712 INFO [Listener at localhost/38451] log.Slf4jLog(67): jetty-6.1.26 2023-07-23 05:11:15,715 INFO [Listener at localhost/38451] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/Jetty_localhost_43833_datanode____5kc9i8/webapp 2023-07-23 05:11:15,788 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbdae32579b45e403: Processing first storage report for DS-454ed3e1-3fd5-469f-b62e-757e0072e1df from datanode 518cc289-9869-4810-8163-4679c2187ddc 2023-07-23 05:11:15,788 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbdae32579b45e403: from storage DS-454ed3e1-3fd5-469f-b62e-757e0072e1df node DatanodeRegistration(127.0.0.1:40565, datanodeUuid=518cc289-9869-4810-8163-4679c2187ddc, infoPort=44881, infoSecurePort=0, ipcPort=38451, storageInfo=lv=-57;cid=testClusterID;nsid=91701207;c=1690089075257), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:15,788 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbdae32579b45e403: Processing first storage report for DS-ee386304-e39e-4385-a245-0686ec9ea20a from datanode 
518cc289-9869-4810-8163-4679c2187ddc 2023-07-23 05:11:15,788 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbdae32579b45e403: from storage DS-ee386304-e39e-4385-a245-0686ec9ea20a node DatanodeRegistration(127.0.0.1:40565, datanodeUuid=518cc289-9869-4810-8163-4679c2187ddc, infoPort=44881, infoSecurePort=0, ipcPort=38451, storageInfo=lv=-57;cid=testClusterID;nsid=91701207;c=1690089075257), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:15,816 INFO [Listener at localhost/38451] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43833 2023-07-23 05:11:15,822 WARN [Listener at localhost/46717] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-23 05:11:15,925 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x92b810e94c2096db: Processing first storage report for DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b from datanode 876a6bf7-ca6c-4d8d-90b5-b7bcc7b9b9ff 2023-07-23 05:11:15,926 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x92b810e94c2096db: from storage DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b node DatanodeRegistration(127.0.0.1:42863, datanodeUuid=876a6bf7-ca6c-4d8d-90b5-b7bcc7b9b9ff, infoPort=45213, infoSecurePort=0, ipcPort=46717, storageInfo=lv=-57;cid=testClusterID;nsid=91701207;c=1690089075257), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:15,926 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x92b810e94c2096db: Processing first storage report for DS-dfb61a41-f582-45e4-b022-c3510d013403 from datanode 876a6bf7-ca6c-4d8d-90b5-b7bcc7b9b9ff 2023-07-23 05:11:15,926 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x92b810e94c2096db: from storage DS-dfb61a41-f582-45e4-b022-c3510d013403 node DatanodeRegistration(127.0.0.1:42863, datanodeUuid=876a6bf7-ca6c-4d8d-90b5-b7bcc7b9b9ff, infoPort=45213, infoSecurePort=0, ipcPort=46717, storageInfo=lv=-57;cid=testClusterID;nsid=91701207;c=1690089075257), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-23 05:11:15,928 DEBUG [Listener at localhost/46717] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa 2023-07-23 05:11:15,930 INFO [Listener at localhost/46717] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/zookeeper_0, clientPort=51330, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-23 05:11:15,931 INFO [Listener at localhost/46717] zookeeper.MiniZooKeeperCluster(283): 
Started MiniZooKeeperCluster and ran 'stat' on client port=51330 2023-07-23 05:11:15,931 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:15,932 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:15,956 INFO [Listener at localhost/46717] util.FSUtils(471): Created version file at hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72 with version=8 2023-07-23 05:11:15,956 INFO [Listener at localhost/46717] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36893/user/jenkins/test-data/2061e502-bc60-87a5-42e4-c8026a8c9b04/hbase-staging 2023-07-23 05:11:15,957 DEBUG [Listener at localhost/46717] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-23 05:11:15,957 DEBUG [Listener at localhost/46717] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-23 05:11:15,957 DEBUG [Listener at localhost/46717] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-23 05:11:15,957 DEBUG [Listener at localhost/46717] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-23 05:11:15,958 INFO [Listener at localhost/46717] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:15,958 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:15,959 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:15,959 INFO [Listener at localhost/46717] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:15,959 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:15,959 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:15,959 INFO [Listener at localhost/46717] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:15,963 INFO [Listener at localhost/46717] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34969 2023-07-23 05:11:15,963 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:15,964 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class 
org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:15,965 INFO [Listener at localhost/46717] zookeeper.RecoverableZooKeeper(93): Process identifier=master:34969 connecting to ZooKeeper ensemble=127.0.0.1:51330 2023-07-23 05:11:15,975 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:349690x0, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:15,976 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:34969-0x101909737dc0000 connected 2023-07-23 05:11:15,989 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:15,990 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:15,990 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:15,990 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34969 2023-07-23 05:11:15,990 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34969 2023-07-23 05:11:15,991 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34969 2023-07-23 05:11:15,991 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34969 2023-07-23 05:11:15,991 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34969 2023-07-23 05:11:15,993 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:15,993 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:15,993 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:15,994 INFO [Listener at localhost/46717] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-23 05:11:15,994 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:15,994 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:15,994 INFO [Listener at localhost/46717] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-23 05:11:15,995 INFO [Listener at localhost/46717] http.HttpServer(1146): Jetty bound to port 42277 2023-07-23 05:11:15,995 INFO [Listener at localhost/46717] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:15,996 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:15,996 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@338f5eef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:15,997 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:15,997 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@41ffc7bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:16,112 INFO [Listener at localhost/46717] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:16,114 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:16,114 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:16,115 INFO [Listener at localhost/46717] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:11:16,116 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,117 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1ec2cbd0{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/jetty-0_0_0_0-42277-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1068817930439820143/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 05:11:16,118 INFO [Listener at localhost/46717] server.AbstractConnector(333): Started ServerConnector@60773ed9{HTTP/1.1, (http/1.1)}{0.0.0.0:42277} 2023-07-23 05:11:16,119 INFO [Listener at localhost/46717] server.Server(415): Started @40958ms 2023-07-23 05:11:16,119 INFO [Listener at localhost/46717] master.HMaster(444): hbase.rootdir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72, hbase.cluster.distributed=false 2023-07-23 05:11:16,138 INFO [Listener at localhost/46717] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:16,138 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,138 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,138 
INFO [Listener at localhost/46717] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:16,138 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,138 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:16,139 INFO [Listener at localhost/46717] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:16,139 INFO [Listener at localhost/46717] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35827 2023-07-23 05:11:16,140 INFO [Listener at localhost/46717] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:16,141 DEBUG [Listener at localhost/46717] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:16,141 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,142 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,143 INFO [Listener at localhost/46717] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35827 connecting to ZooKeeper ensemble=127.0.0.1:51330 2023-07-23 05:11:16,146 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:358270x0, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:16,148 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35827-0x101909737dc0001 connected 2023-07-23 05:11:16,148 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:16,149 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:16,149 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:16,151 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35827 2023-07-23 05:11:16,152 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35827 2023-07-23 05:11:16,153 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35827 2023-07-23 05:11:16,155 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35827 2023-07-23 05:11:16,155 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35827 2023-07-23 05:11:16,156 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:16,156 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:16,157 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:16,157 INFO [Listener at localhost/46717] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:11:16,157 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:16,157 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:16,157 INFO [Listener at localhost/46717] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 05:11:16,158 INFO [Listener at localhost/46717] http.HttpServer(1146): Jetty bound to port 46767 2023-07-23 05:11:16,158 INFO [Listener at localhost/46717] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:16,159 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,160 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@52e37a10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:16,160 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,161 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2f89cebb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:16,281 INFO [Listener at localhost/46717] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:16,282 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:16,282 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:16,282 INFO [Listener at localhost/46717] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:11:16,283 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,284 INFO 
[Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@3a3e072e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/jetty-0_0_0_0-46767-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9163358963050964750/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:16,285 INFO [Listener at localhost/46717] server.AbstractConnector(333): Started ServerConnector@70813e9d{HTTP/1.1, (http/1.1)}{0.0.0.0:46767} 2023-07-23 05:11:16,285 INFO [Listener at localhost/46717] server.Server(415): Started @41125ms 2023-07-23 05:11:16,297 INFO [Listener at localhost/46717] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:16,297 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,297 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,297 INFO [Listener at localhost/46717] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:16,297 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,297 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:16,298 INFO [Listener at localhost/46717] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:16,298 INFO [Listener at localhost/46717] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43649 2023-07-23 05:11:16,299 INFO [Listener at localhost/46717] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:16,300 DEBUG [Listener at localhost/46717] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:16,300 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,301 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,302 INFO [Listener at localhost/46717] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43649 connecting to ZooKeeper ensemble=127.0.0.1:51330 2023-07-23 05:11:16,307 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:436490x0, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 
05:11:16,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43649-0x101909737dc0002 connected 2023-07-23 05:11:16,308 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:16,308 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:16,309 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:16,310 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43649 2023-07-23 05:11:16,311 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43649 2023-07-23 05:11:16,311 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43649 2023-07-23 05:11:16,311 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43649 2023-07-23 05:11:16,311 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43649 2023-07-23 05:11:16,313 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:16,313 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:16,313 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:16,314 INFO [Listener at localhost/46717] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:11:16,314 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:16,314 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:16,314 INFO [Listener at localhost/46717] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
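The handler counts and queue shapes printed above (default.FPBQ.Fifo with handlerCount=3, priority.RWQ.Fifo split into read/write queues, maxQueueLength=30) are driven by the region server's RPC configuration. As a point of reference only, a minimal sketch of the stock hbase-site keys that control these pools is shown below; the key names are the standard documented ones, but the exact values used by this mini-cluster come from the test's own configuration, so treat the numbers as illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcQueueTuning {
    // Illustrative only: produces a config with a small RPC handler pool,
    // similar in shape to what the log above reports for the test region servers.
    public static Configuration smallHandlerPool() {
        Configuration conf = HBaseConfiguration.create();
        // Size of the default call-queue handler pool (default.FPBQ.Fifo in the log).
        conf.setInt("hbase.regionserver.handler.count", 3);
        // Read/write split used by the priority RWQueueRpcExecutor.
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);
        // Cap on queued calls per call queue (maxQueueLength=30 in the log).
        conf.setInt("hbase.ipc.server.max.callqueue.length", 30);
        return conf;
    }
}
```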
2023-07-23 05:11:16,315 INFO [Listener at localhost/46717] http.HttpServer(1146): Jetty bound to port 44685 2023-07-23 05:11:16,315 INFO [Listener at localhost/46717] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:16,317 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,317 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@487d57f8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:16,318 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,318 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6dc816f5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:16,433 INFO [Listener at localhost/46717] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:16,434 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:16,434 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:16,435 INFO [Listener at localhost/46717] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:11:16,436 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,436 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@70ea8f0d{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/jetty-0_0_0_0-44685-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8455323380611805563/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:16,439 INFO [Listener at localhost/46717] server.AbstractConnector(333): Started ServerConnector@60a59e1f{HTTP/1.1, (http/1.1)}{0.0.0.0:44685} 2023-07-23 05:11:16,439 INFO [Listener at localhost/46717] server.Server(415): Started @41279ms 2023-07-23 05:11:16,453 INFO [Listener at localhost/46717] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:16,453 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,453 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,453 INFO [Listener at localhost/46717] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:16,454 INFO 
[Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:16,454 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:16,454 INFO [Listener at localhost/46717] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:16,454 INFO [Listener at localhost/46717] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44971 2023-07-23 05:11:16,455 INFO [Listener at localhost/46717] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:16,456 DEBUG [Listener at localhost/46717] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:16,456 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,457 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,458 INFO [Listener at localhost/46717] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44971 connecting to ZooKeeper ensemble=127.0.0.1:51330 2023-07-23 05:11:16,461 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:449710x0, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:16,463 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44971-0x101909737dc0003 connected 2023-07-23 05:11:16,463 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:16,463 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:16,463 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:16,464 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44971 2023-07-23 05:11:16,464 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44971 2023-07-23 05:11:16,466 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44971 2023-07-23 05:11:16,468 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44971 2023-07-23 05:11:16,468 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44971 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:16,470 INFO [Listener at localhost/46717] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 05:11:16,471 INFO [Listener at localhost/46717] http.HttpServer(1146): Jetty bound to port 40545 2023-07-23 05:11:16,471 INFO [Listener at localhost/46717] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:16,472 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,472 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@55105c90{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:16,473 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,473 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6fc3bb30{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:16,595 INFO [Listener at localhost/46717] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:16,596 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:16,597 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:16,597 INFO [Listener at localhost/46717] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-23 05:11:16,598 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:16,599 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@61508a33{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/jetty-0_0_0_0-40545-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3149806069043275417/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:16,601 INFO [Listener at localhost/46717] server.AbstractConnector(333): Started ServerConnector@6a4b4809{HTTP/1.1, (http/1.1)}{0.0.0.0:40545} 2023-07-23 05:11:16,601 INFO [Listener at localhost/46717] server.Server(415): Started @41441ms 2023-07-23 05:11:16,603 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:16,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@4fd3b347{HTTP/1.1, (http/1.1)}{0.0.0.0:34819} 2023-07-23 05:11:16,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @41448ms 2023-07-23 05:11:16,609 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,610 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 05:11:16,610 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,611 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:16,611 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:16,611 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:16,611 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:16,613 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:16,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:11:16,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,34969,1690089075958 from backup master directory 2023-07-23 05:11:16,615 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:11:16,616 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,617 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-23 05:11:16,617 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:11:16,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,635 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/hbase.id with ID: b3c294d6-bf9a-47ec-a205-e8c34f4cecc4 2023-07-23 05:11:16,651 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:16,653 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:16,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x446486fd to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:16,668 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3ca6418a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:16,668 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:16,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-23 05:11:16,669 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:16,671 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store-tmp 2023-07-23 05:11:16,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:16,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 05:11:16,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:16,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:16,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 05:11:16,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:16,681 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 05:11:16,681 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:16,682 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/WALs/jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34969%2C1690089075958, suffix=, logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/WALs/jenkins-hbase4.apache.org,34969,1690089075958, archiveDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/oldWALs, maxLogs=10 2023-07-23 05:11:16,703 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK] 2023-07-23 05:11:16,703 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK] 2023-07-23 05:11:16,703 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK] 2023-07-23 05:11:16,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/WALs/jenkins-hbase4.apache.org,34969,1690089075958/jenkins-hbase4.apache.org%2C34969%2C1690089075958.1690089076685 2023-07-23 05:11:16,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK], DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK], DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK]] 2023-07-23 05:11:16,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:16,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:16,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:16,708 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:16,711 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:16,712 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-23 05:11:16,713 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-23 05:11:16,713 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:16,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:16,714 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:16,717 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-23 05:11:16,719 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:16,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11996350560, jitterRate=0.11724720895290375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:16,720 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:16,720 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-23 05:11:16,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-23 05:11:16,722 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-23 05:11:16,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-23 05:11:16,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-23 05:11:16,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-23 05:11:16,723 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-23 05:11:16,727 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-23 05:11:16,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-23 05:11:16,729 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-23 05:11:16,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-23 05:11:16,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-23 05:11:16,733 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:16,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-23 05:11:16,734 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-23 05:11:16,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-23 05:11:16,736 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:16,736 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:16,737 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-23 05:11:16,737 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:16,737 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:16,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,34969,1690089075958, sessionid=0x101909737dc0000, setting cluster-up flag (Was=false) 2023-07-23 05:11:16,742 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:16,751 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-23 05:11:16,752 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,761 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:16,768 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-23 05:11:16,769 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:16,769 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.hbase-snapshot/.tmp 2023-07-23 05:11:16,770 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-23 05:11:16,771 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-23 05:11:16,771 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-23 05:11:16,772 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-23 05:11:16,772 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
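The entries above show the master registering the RSGroupAdminService coprocessor service and loading org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint (plus the test's own CPMasterObserver). For readers following along, a minimal sketch of how rsgroup support is normally wired into an HBase 2.x configuration is given below; the two keys are the standard ones for branch-2, but the test harness may set them through its own plumbing, so this is a reference sketch rather than the test's actual setup code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RsGroupSetup {
    // Illustrative only: enables the rsgroup feature the way a standalone
    // branch-2 cluster would, matching the coprocessor seen in the log above.
    public static Configuration withRsGroups() {
        Configuration conf = HBaseConfiguration.create();
        // Load the rsgroup admin endpoint on the master.
        conf.set("hbase.coprocessor.master.classes",
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // Route region assignment through the group-aware balancer.
        conf.set("hbase.master.loadbalancer.class",
            "org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer");
        return conf;
    }
}
```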
2023-07-23 05:11:16,779 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-23 05:11:16,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 05:11:16,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 05:11:16,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-23 05:11:16,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:16,793 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690089106800 2023-07-23 05:11:16,800 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-23 05:11:16,800 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-23 05:11:16,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-23 05:11:16,801 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:16,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
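The descriptor printed above is HBase's own definition of hbase:meta. For readers who want to see how attributes such as BLOOMFILTER, IN_MEMORY, VERSIONS and BLOCKSIZE map onto the 2.x client API, a minimal builder sketch follows; the table name and class here are hypothetical and only mirror the 'info' family attributes from the log, they are not code from the test.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaLikeDescriptor {
    // Illustrative only: builds a single-family descriptor whose attributes
    // match the 'info' family shown in the hbase:meta descriptor above
    // (BLOOMFILTER=NONE, IN_MEMORY=true, VERSIONS=3, BLOCKSIZE=8192).
    public static TableDescriptor build() {
        ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("info"))
            .setBloomFilterType(BloomType.NONE)
            .setInMemory(true)
            .setMaxVersions(3)
            .setBlocksize(8192)
            .build();
        return TableDescriptorBuilder
            .newBuilder(TableName.valueOf("demo_meta_like"))
            .setColumnFamily(info)
            .build();
    }
}
```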
2023-07-23 05:11:16,816 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(951): ClusterId : b3c294d6-bf9a-47ec-a205-e8c34f4cecc4 2023-07-23 05:11:16,816 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:16,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-23 05:11:16,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-23 05:11:16,817 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-23 05:11:16,817 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(951): ClusterId : b3c294d6-bf9a-47ec-a205-e8c34f4cecc4 2023-07-23 05:11:16,818 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:16,818 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-23 05:11:16,818 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-23 05:11:16,819 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089076819,5,FailOnTimeoutGroup] 2023-07-23 05:11:16,819 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(951): ClusterId : b3c294d6-bf9a-47ec-a205-e8c34f4cecc4 2023-07-23 05:11:16,819 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:16,823 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:16,823 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:16,824 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089076819,5,FailOnTimeoutGroup] 2023-07-23 05:11:16,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-23 05:11:16,825 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:16,826 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:16,826 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:16,827 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:16,830 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:16,830 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:16,830 DEBUG [RS:0;jenkins-hbase4:35827] zookeeper.ReadOnlyZKClient(139): Connect 0x4b5dcbf3 to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:16,832 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:16,832 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:16,838 DEBUG [RS:2;jenkins-hbase4:44971] zookeeper.ReadOnlyZKClient(139): Connect 0x7f20dc29 to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:16,839 DEBUG [RS:1;jenkins-hbase4:43649] zookeeper.ReadOnlyZKClient(139): Connect 0x222ba5ad to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:16,853 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:16,853 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:16,854 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72 2023-07-23 05:11:16,854 DEBUG [RS:2;jenkins-hbase4:44971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e680c42, compressor=null, tcpKeepAlive=true, 
tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:16,855 DEBUG [RS:2;jenkins-hbase4:44971] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4e15e4db, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:16,866 DEBUG [RS:1;jenkins-hbase4:43649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2cec4f98, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:16,867 DEBUG [RS:1;jenkins-hbase4:43649] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@28d3cedd, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:16,875 DEBUG [RS:0;jenkins-hbase4:35827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@459a3ce8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:16,875 DEBUG [RS:0;jenkins-hbase4:35827] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@41a5a083, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:16,876 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:44971 2023-07-23 05:11:16,876 INFO [RS:2;jenkins-hbase4:44971] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:16,876 INFO [RS:2;jenkins-hbase4:44971] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:16,876 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:11:16,877 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34969,1690089075958 with isa=jenkins-hbase4.apache.org/172.31.14.131:44971, startcode=1690089076453 2023-07-23 05:11:16,877 DEBUG [RS:2;jenkins-hbase4:44971] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:16,878 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43649 2023-07-23 05:11:16,878 INFO [RS:1;jenkins-hbase4:43649] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:16,878 INFO [RS:1;jenkins-hbase4:43649] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:16,878 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-23 05:11:16,878 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34969,1690089075958 with isa=jenkins-hbase4.apache.org/172.31.14.131:43649, startcode=1690089076297 2023-07-23 05:11:16,878 DEBUG [RS:1;jenkins-hbase4:43649] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:16,882 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43019, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:16,883 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37817, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:16,889 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34969] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,889 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-23 05:11:16,890 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-23 05:11:16,890 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34969] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,890 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 05:11:16,890 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72 2023-07-23 05:11:16,890 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-23 05:11:16,890 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37269 2023-07-23 05:11:16,890 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42277 2023-07-23 05:11:16,890 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72 2023-07-23 05:11:16,890 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37269 2023-07-23 05:11:16,890 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42277 2023-07-23 05:11:16,894 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:35827 2023-07-23 05:11:16,894 INFO [RS:0;jenkins-hbase4:35827] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:16,894 INFO [RS:0;jenkins-hbase4:35827] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:16,894 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:11:16,895 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:16,896 DEBUG [RS:2;jenkins-hbase4:44971] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,896 DEBUG [RS:1;jenkins-hbase4:43649] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,896 WARN [RS:2;jenkins-hbase4:44971] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-23 05:11:16,896 WARN [RS:1;jenkins-hbase4:43649] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 05:11:16,896 INFO [RS:2;jenkins-hbase4:44971] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:16,896 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43649,1690089076297] 2023-07-23 05:11:16,896 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44971,1690089076453] 2023-07-23 05:11:16,896 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,896 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34969,1690089075958 with isa=jenkins-hbase4.apache.org/172.31.14.131:35827, startcode=1690089076137 2023-07-23 05:11:16,896 INFO [RS:1;jenkins-hbase4:43649] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:16,896 DEBUG [RS:0;jenkins-hbase4:35827] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:16,896 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,899 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:32797, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:16,900 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34969] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,900 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 05:11:16,900 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-23 05:11:16,903 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:16,904 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72 2023-07-23 05:11:16,904 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37269 2023-07-23 05:11:16,904 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42277 2023-07-23 05:11:16,905 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:11:16,905 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:16,905 DEBUG [RS:1;jenkins-hbase4:43649] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,905 DEBUG [RS:0;jenkins-hbase4:35827] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,905 DEBUG [RS:2;jenkins-hbase4:44971] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,906 WARN [RS:0;jenkins-hbase4:35827] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 05:11:16,906 INFO [RS:0;jenkins-hbase4:35827] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:16,906 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35827,1690089076137] 2023-07-23 05:11:16,906 DEBUG [RS:1;jenkins-hbase4:43649] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,906 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,906 DEBUG [RS:1;jenkins-hbase4:43649] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,906 DEBUG [RS:2;jenkins-hbase4:44971] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,907 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/info 2023-07-23 05:11:16,907 DEBUG [RS:2;jenkins-hbase4:44971] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,908 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:11:16,908 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:16,908 INFO [RS:1;jenkins-hbase4:43649] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:16,908 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:16,909 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:16,909 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 
05:11:16,910 INFO [RS:2;jenkins-hbase4:44971] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:16,910 INFO [RS:1;jenkins-hbase4:43649] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:16,911 INFO [RS:1;jenkins-hbase4:43649] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:16,911 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,911 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:16,912 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:11:16,912 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,912 INFO [RS:2;jenkins-hbase4:44971] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:16,913 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,913 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 DEBUG [RS:0;jenkins-hbase4:35827] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] 
executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 DEBUG [RS:0;jenkins-hbase4:35827] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,914 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:16,914 DEBUG [RS:1;jenkins-hbase4:43649] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,915 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:11:16,915 DEBUG [RS:0;jenkins-hbase4:35827] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,916 DEBUG [RS:0;jenkins-hbase4:35827] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:16,916 INFO [RS:0;jenkins-hbase4:35827] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:16,916 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/table 2023-07-23 05:11:16,916 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:11:16,917 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 
05:11:16,921 INFO [RS:2;jenkins-hbase4:44971] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:16,921 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740 2023-07-23 05:11:16,921 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,921 INFO [RS:0;jenkins-hbase4:35827] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:16,923 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:16,923 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740 2023-07-23 05:11:16,925 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-23 05:11:16,927 INFO [RS:0;jenkins-hbase4:35827] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:16,927 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,927 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,927 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,928 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,928 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:16,929 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:16,930 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,930 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:16,931 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,931 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,931 DEBUG [RS:0;jenkins-hbase4:35827] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,931 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,931 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,931 DEBUG [RS:2;jenkins-hbase4:44971] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:16,933 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:11:16,938 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,938 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,939 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,939 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,939 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,939 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,944 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:16,945 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11647186400, jitterRate=0.08472876250743866}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:11:16,945 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:11:16,945 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:11:16,945 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:11:16,945 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:11:16,945 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:11:16,945 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:11:16,946 INFO [RS:1;jenkins-hbase4:43649] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:16,947 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43649,1690089076297-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:16,950 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:16,950 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:11:16,953 INFO [RS:0;jenkins-hbase4:35827] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:16,953 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-23 05:11:16,953 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35827,1690089076137-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:16,953 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-23 05:11:16,955 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-23 05:11:16,956 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-23 05:11:16,960 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-23 05:11:16,961 INFO [RS:2;jenkins-hbase4:44971] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:16,961 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44971,1690089076453-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-23 05:11:16,970 INFO [RS:1;jenkins-hbase4:43649] regionserver.Replication(203): jenkins-hbase4.apache.org,43649,1690089076297 started 2023-07-23 05:11:16,970 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43649,1690089076297, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43649, sessionid=0x101909737dc0002 2023-07-23 05:11:16,971 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:16,971 DEBUG [RS:1;jenkins-hbase4:43649] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,971 DEBUG [RS:1;jenkins-hbase4:43649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43649,1690089076297' 2023-07-23 05:11:16,971 DEBUG [RS:1;jenkins-hbase4:43649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:16,974 DEBUG [RS:1;jenkins-hbase4:43649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:16,975 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:16,975 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:16,975 DEBUG [RS:1;jenkins-hbase4:43649] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:16,976 DEBUG [RS:1;jenkins-hbase4:43649] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43649,1690089076297' 2023-07-23 05:11:16,976 DEBUG [RS:1;jenkins-hbase4:43649] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:16,976 DEBUG [RS:1;jenkins-hbase4:43649] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:16,976 DEBUG [RS:1;jenkins-hbase4:43649] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:16,976 INFO [RS:1;jenkins-hbase4:43649] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:11:16,976 INFO [RS:1;jenkins-hbase4:43649] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-23 05:11:16,987 INFO [RS:2;jenkins-hbase4:44971] regionserver.Replication(203): jenkins-hbase4.apache.org,44971,1690089076453 started 2023-07-23 05:11:16,987 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44971,1690089076453, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44971, sessionid=0x101909737dc0003 2023-07-23 05:11:16,998 INFO [RS:0;jenkins-hbase4:35827] regionserver.Replication(203): jenkins-hbase4.apache.org,35827,1690089076137 started 2023-07-23 05:11:16,998 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:16,999 DEBUG [RS:2;jenkins-hbase4:44971] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:16,999 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35827,1690089076137, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35827, sessionid=0x101909737dc0001 2023-07-23 05:11:16,999 DEBUG [RS:2;jenkins-hbase4:44971] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44971,1690089076453' 2023-07-23 05:11:16,999 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:16,999 DEBUG [RS:0;jenkins-hbase4:35827] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:16,999 DEBUG [RS:0;jenkins-hbase4:35827] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35827,1690089076137' 2023-07-23 05:11:16,999 DEBUG [RS:0;jenkins-hbase4:35827] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:16,999 DEBUG [RS:2;jenkins-hbase4:44971] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:16,999 DEBUG [RS:2;jenkins-hbase4:44971] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:16,999 DEBUG [RS:0;jenkins-hbase4:35827] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:17,000 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:17,000 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:17,000 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:17,000 DEBUG [RS:2;jenkins-hbase4:44971] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:17,000 DEBUG [RS:2;jenkins-hbase4:44971] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44971,1690089076453' 2023-07-23 05:11:17,000 DEBUG [RS:2;jenkins-hbase4:44971] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:17,000 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:17,000 
DEBUG [RS:0;jenkins-hbase4:35827] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:17,000 DEBUG [RS:0;jenkins-hbase4:35827] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35827,1690089076137' 2023-07-23 05:11:17,000 DEBUG [RS:0;jenkins-hbase4:35827] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:17,000 DEBUG [RS:2;jenkins-hbase4:44971] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:17,001 DEBUG [RS:0;jenkins-hbase4:35827] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:17,001 DEBUG [RS:2;jenkins-hbase4:44971] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:17,001 INFO [RS:2;jenkins-hbase4:44971] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:11:17,001 INFO [RS:2;jenkins-hbase4:44971] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 05:11:17,001 DEBUG [RS:0;jenkins-hbase4:35827] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:17,001 INFO [RS:0;jenkins-hbase4:35827] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:11:17,001 INFO [RS:0;jenkins-hbase4:35827] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-23 05:11:17,078 INFO [RS:1;jenkins-hbase4:43649] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43649%2C1690089076297, suffix=, logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,43649,1690089076297, archiveDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs, maxLogs=32 2023-07-23 05:11:17,095 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK] 2023-07-23 05:11:17,096 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK] 2023-07-23 05:11:17,096 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK] 2023-07-23 05:11:17,098 INFO [RS:1;jenkins-hbase4:43649] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,43649,1690089076297/jenkins-hbase4.apache.org%2C43649%2C1690089076297.1690089077079 2023-07-23 05:11:17,098 DEBUG [RS:1;jenkins-hbase4:43649] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK], 
DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK], DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK]] 2023-07-23 05:11:17,103 INFO [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44971%2C1690089076453, suffix=, logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,44971,1690089076453, archiveDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs, maxLogs=32 2023-07-23 05:11:17,103 INFO [RS:0;jenkins-hbase4:35827] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35827%2C1690089076137, suffix=, logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,35827,1690089076137, archiveDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs, maxLogs=32 2023-07-23 05:11:17,111 DEBUG [jenkins-hbase4:34969] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-23 05:11:17,111 DEBUG [jenkins-hbase4:34969] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:17,111 DEBUG [jenkins-hbase4:34969] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:17,111 DEBUG [jenkins-hbase4:34969] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:17,111 DEBUG [jenkins-hbase4:34969] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:17,111 DEBUG [jenkins-hbase4:34969] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:17,112 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44971,1690089076453, state=OPENING 2023-07-23 05:11:17,113 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-23 05:11:17,115 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:17,116 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44971,1690089076453}] 2023-07-23 05:11:17,116 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:11:17,124 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK] 2023-07-23 05:11:17,124 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK] 2023-07-23 05:11:17,125 DEBUG [RS-EventLoopGroup-15-1] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK] 2023-07-23 05:11:17,132 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK] 2023-07-23 05:11:17,134 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK] 2023-07-23 05:11:17,134 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK] 2023-07-23 05:11:17,136 INFO [RS:0;jenkins-hbase4:35827] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,35827,1690089076137/jenkins-hbase4.apache.org%2C35827%2C1690089076137.1690089077103 2023-07-23 05:11:17,137 DEBUG [RS:0;jenkins-hbase4:35827] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK], DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK], DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK]] 2023-07-23 05:11:17,139 INFO [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,44971,1690089076453/jenkins-hbase4.apache.org%2C44971%2C1690089076453.1690089077103 2023-07-23 05:11:17,141 DEBUG [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK], DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK], DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK]] 2023-07-23 05:11:17,277 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:17,277 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:11:17,278 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49902, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:11:17,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-23 05:11:17,282 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:17,284 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44971%2C1690089076453.meta, suffix=.meta, 
logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,44971,1690089076453, archiveDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs, maxLogs=32 2023-07-23 05:11:17,298 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK] 2023-07-23 05:11:17,298 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK] 2023-07-23 05:11:17,298 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK] 2023-07-23 05:11:17,303 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,44971,1690089076453/jenkins-hbase4.apache.org%2C44971%2C1690089076453.meta.1690089077285.meta 2023-07-23 05:11:17,307 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK], DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK], DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK]] 2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-23 05:11:17,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-23 05:11:17,308 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-23 05:11:17,310 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-23 05:11:17,311 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/info 2023-07-23 05:11:17,311 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/info 2023-07-23 05:11:17,311 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-23 05:11:17,311 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:17,312 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-23 05:11:17,312 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:11:17,312 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/rep_barrier 2023-07-23 05:11:17,313 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-23 05:11:17,313 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:17,313 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-23 05:11:17,314 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/table 2023-07-23 05:11:17,314 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/table 2023-07-23 05:11:17,314 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-23 05:11:17,315 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:17,315 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740 2023-07-23 05:11:17,316 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740 2023-07-23 05:11:17,318 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-23 05:11:17,319 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-23 05:11:17,319 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11672552480, jitterRate=0.08709116280078888}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-23 05:11:17,319 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-23 05:11:17,320 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690089077277 2023-07-23 05:11:17,326 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-23 05:11:17,326 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-23 05:11:17,327 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44971,1690089076453, state=OPEN 2023-07-23 05:11:17,329 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-23 05:11:17,329 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-23 05:11:17,331 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-23 05:11:17,331 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44971,1690089076453 in 214 msec 2023-07-23 05:11:17,332 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-23 05:11:17,332 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 376 msec 2023-07-23 05:11:17,333 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 560 msec 2023-07-23 05:11:17,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690089077334, completionTime=-1 2023-07-23 05:11:17,334 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-23 05:11:17,334 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-23 05:11:17,336 DEBUG [hconnection-0x7856e46e-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:17,338 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49914, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:17,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-23 05:11:17,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690089137339 2023-07-23 05:11:17,339 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690089197339 2023-07-23 05:11:17,340 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34969,1690089075958-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34969,1690089075958-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34969,1690089075958-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:34969, period=300000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-23 05:11:17,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:17,349 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-23 05:11:17,349 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-23 05:11:17,351 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:17,351 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:17,353 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,354 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f empty. 2023-07-23 05:11:17,354 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,354 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-23 05:11:17,370 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:17,371 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 556139edd0141ac2f7d66b7c7bb9ba5f, NAME => 'hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp 2023-07-23 05:11:17,384 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:17,384 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 556139edd0141ac2f7d66b7c7bb9ba5f, disabling compactions & flushes 2023-07-23 05:11:17,384 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 
2023-07-23 05:11:17,384 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:17,384 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. after waiting 0 ms 2023-07-23 05:11:17,384 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:17,384 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:17,384 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 556139edd0141ac2f7d66b7c7bb9ba5f: 2023-07-23 05:11:17,387 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:17,388 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089077388"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089077388"}]},"ts":"1690089077388"} 2023-07-23 05:11:17,391 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:11:17,391 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:17,391 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089077391"}]},"ts":"1690089077391"} 2023-07-23 05:11:17,394 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:17,396 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-23 05:11:17,399 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-23 05:11:17,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:17,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:17,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is 
on host 0 2023-07-23 05:11:17,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:17,400 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:17,401 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=556139edd0141ac2f7d66b7c7bb9ba5f, ASSIGN}] 2023-07-23 05:11:17,402 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:17,402 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=556139edd0141ac2f7d66b7c7bb9ba5f, ASSIGN 2023-07-23 05:11:17,403 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:17,403 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=556139edd0141ac2f7d66b7c7bb9ba5f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43649,1690089076297; forceNewPlan=false, retain=false 2023-07-23 05:11:17,404 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,405 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5 empty. 
2023-07-23 05:11:17,405 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,405 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-23 05:11:17,429 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:17,430 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 6bca6539cf64287f2b8b50b4d00a25f5, NAME => 'hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp 2023-07-23 05:11:17,443 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:17,443 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 6bca6539cf64287f2b8b50b4d00a25f5, disabling compactions & flushes 2023-07-23 05:11:17,443 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:17,443 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:17,443 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. after waiting 0 ms 2023-07-23 05:11:17,443 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:17,444 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 
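
Aside: the 'hbase:rsgroup' descriptor created above pairs a single 'm' family (one version, 64 KB blocks) with a coprocessor attribute and a DisabledRegionSplitPolicy override. A rough client-side equivalent using the HBase 2.x descriptor builders; the table name and connection handling here are illustrative placeholders, not taken from the test source:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeTableSketch {
    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Single column family 'm', one version, 64 KB block size -- mirroring the log output.
            ColumnFamilyDescriptorBuilder cf =
                ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
                    .setMaxVersions(1)
                    .setBlocksize(65536);
            TableDescriptorBuilder table =
                TableDescriptorBuilder.newBuilder(TableName.valueOf("rsgroup_like"))
                    // Coprocessor and split-policy attributes as shown in the descriptor above.
                    .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                    .setRegionSplitPolicyClassName(
                        "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                    .setColumnFamily(cf.build());
            admin.createTable(table.build());
        }
    }
}
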
2023-07-23 05:11:17,444 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 6bca6539cf64287f2b8b50b4d00a25f5: 2023-07-23 05:11:17,446 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:17,447 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089077447"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089077447"}]},"ts":"1690089077447"} 2023-07-23 05:11:17,448 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-23 05:11:17,449 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:17,449 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089077449"}]},"ts":"1690089077449"} 2023-07-23 05:11:17,450 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-23 05:11:17,454 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:17,454 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:17,454 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:17,454 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:17,454 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:17,454 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6bca6539cf64287f2b8b50b4d00a25f5, ASSIGN}] 2023-07-23 05:11:17,456 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=6bca6539cf64287f2b8b50b4d00a25f5, ASSIGN 2023-07-23 05:11:17,457 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=6bca6539cf64287f2b8b50b4d00a25f5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44971,1690089076453; forceNewPlan=false, retain=false 2023-07-23 05:11:17,457 INFO [jenkins-hbase4:34969] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-23 05:11:17,459 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=556139edd0141ac2f7d66b7c7bb9ba5f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:17,459 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089077459"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089077459"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089077459"}]},"ts":"1690089077459"} 2023-07-23 05:11:17,459 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=6bca6539cf64287f2b8b50b4d00a25f5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:17,459 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089077459"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089077459"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089077459"}]},"ts":"1690089077459"} 2023-07-23 05:11:17,460 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 556139edd0141ac2f7d66b7c7bb9ba5f, server=jenkins-hbase4.apache.org,43649,1690089076297}] 2023-07-23 05:11:17,461 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 6bca6539cf64287f2b8b50b4d00a25f5, server=jenkins-hbase4.apache.org,44971,1690089076453}] 2023-07-23 05:11:17,613 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:17,613 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:11:17,615 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56324, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:11:17,618 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:17,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 
2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 6bca6539cf64287f2b8b50b4d00a25f5, NAME => 'hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 556139edd0141ac2f7d66b7c7bb9ba5f, NAME => 'hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. service=MultiRowMutationService 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,619 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,619 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,621 INFO [StoreOpener-556139edd0141ac2f7d66b7c7bb9ba5f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,623 INFO [StoreOpener-6bca6539cf64287f2b8b50b4d00a25f5-1] regionserver.HStore(381): 
Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,624 DEBUG [StoreOpener-556139edd0141ac2f7d66b7c7bb9ba5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/info 2023-07-23 05:11:17,624 DEBUG [StoreOpener-556139edd0141ac2f7d66b7c7bb9ba5f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/info 2023-07-23 05:11:17,624 INFO [StoreOpener-556139edd0141ac2f7d66b7c7bb9ba5f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 556139edd0141ac2f7d66b7c7bb9ba5f columnFamilyName info 2023-07-23 05:11:17,624 DEBUG [StoreOpener-6bca6539cf64287f2b8b50b4d00a25f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/m 2023-07-23 05:11:17,624 DEBUG [StoreOpener-6bca6539cf64287f2b8b50b4d00a25f5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/m 2023-07-23 05:11:17,625 INFO [StoreOpener-6bca6539cf64287f2b8b50b4d00a25f5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 6bca6539cf64287f2b8b50b4d00a25f5 columnFamilyName m 2023-07-23 05:11:17,625 INFO [StoreOpener-556139edd0141ac2f7d66b7c7bb9ba5f-1] regionserver.HStore(310): Store=556139edd0141ac2f7d66b7c7bb9ba5f/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:17,625 INFO [StoreOpener-6bca6539cf64287f2b8b50b4d00a25f5-1] regionserver.HStore(310): Store=6bca6539cf64287f2b8b50b4d00a25f5/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:17,626 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,626 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:17,629 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:17,634 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:17,635 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 556139edd0141ac2f7d66b7c7bb9ba5f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9911492640, jitterRate=-0.07692031562328339}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:17,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 556139edd0141ac2f7d66b7c7bb9ba5f: 2023-07-23 05:11:17,635 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:17,636 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 6bca6539cf64287f2b8b50b4d00a25f5; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1047d2d5, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:17,636 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 6bca6539cf64287f2b8b50b4d00a25f5: 2023-07-23 05:11:17,637 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f., pid=8, masterSystemTime=1690089077613 2023-07-23 05:11:17,637 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5., pid=9, masterSystemTime=1690089077614 2023-07-23 05:11:17,640 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:17,640 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:17,641 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=6bca6539cf64287f2b8b50b4d00a25f5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:17,641 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690089077641"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089077641"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089077641"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089077641"}]},"ts":"1690089077641"} 2023-07-23 05:11:17,641 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:17,642 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:17,642 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=556139edd0141ac2f7d66b7c7bb9ba5f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:17,642 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690089077642"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089077642"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089077642"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089077642"}]},"ts":"1690089077642"} 2023-07-23 05:11:17,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-23 05:11:17,649 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 6bca6539cf64287f2b8b50b4d00a25f5, server=jenkins-hbase4.apache.org,44971,1690089076453 in 186 msec 2023-07-23 05:11:17,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-23 05:11:17,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 556139edd0141ac2f7d66b7c7bb9ba5f, server=jenkins-hbase4.apache.org,43649,1690089076297 in 188 msec 2023-07-23 05:11:17,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-23 05:11:17,651 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=6bca6539cf64287f2b8b50b4d00a25f5, ASSIGN in 195 msec 2023-07-23 05:11:17,651 INFO 
[PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-23 05:11:17,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=556139edd0141ac2f7d66b7c7bb9ba5f, ASSIGN in 249 msec 2023-07-23 05:11:17,652 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:17,652 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089077652"}]},"ts":"1690089077652"} 2023-07-23 05:11:17,652 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:17,652 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089077652"}]},"ts":"1690089077652"} 2023-07-23 05:11:17,653 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-23 05:11:17,654 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-23 05:11:17,656 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:17,657 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:17,657 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 262 msec 2023-07-23 05:11:17,658 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 308 msec 2023-07-23 05:11:17,703 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-23 05:11:17,703 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-23 05:11:17,708 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:17,708 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:17,710 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:11:17,711 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-23 05:11:17,750 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-23 05:11:17,752 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:17,752 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:17,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:17,758 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56336, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:17,760 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-23 05:11:17,769 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:17,772 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 12 msec 2023-07-23 05:11:17,782 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-23 05:11:17,788 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:17,791 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-23 05:11:17,802 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-23 05:11:17,804 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-23 05:11:17,804 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.187sec 2023-07-23 05:11:17,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-23 05:11:17,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-23 05:11:17,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-23 05:11:17,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34969,1690089075958-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-23 05:11:17,808 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34969,1690089075958-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-23 05:11:17,817 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-23 05:11:17,820 DEBUG [Listener at localhost/46717] zookeeper.ReadOnlyZKClient(139): Connect 0x01fb70e0 to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:17,827 DEBUG [Listener at localhost/46717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a21632f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:17,828 DEBUG [hconnection-0x4f45754a-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:17,831 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49918, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:17,833 INFO [Listener at localhost/46717] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:17,833 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:17,835 DEBUG [Listener at localhost/46717] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-23 05:11:17,839 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48394, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-23 05:11:17,842 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-23 05:11:17,842 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, 
baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:17,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-23 05:11:17,843 DEBUG [Listener at localhost/46717] zookeeper.ReadOnlyZKClient(139): Connect 0x53254b14 to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:17,848 DEBUG [Listener at localhost/46717] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@37ae9b1f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:17,849 INFO [Listener at localhost/46717] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:51330 2023-07-23 05:11:17,855 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:17,856 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x101909737dc000a connected 2023-07-23 05:11:17,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:17,857 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:17,860 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-23 05:11:17,872 INFO [Listener at localhost/46717] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-23 05:11:17,872 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:17,872 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:17,872 INFO [Listener at localhost/46717] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-23 05:11:17,872 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-23 05:11:17,873 INFO [Listener at localhost/46717] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-23 05:11:17,873 INFO [Listener at localhost/46717] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-23 05:11:17,873 INFO [Listener at localhost/46717] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39353 2023-07-23 05:11:17,874 INFO [Listener at 
localhost/46717] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-23 05:11:17,875 DEBUG [Listener at localhost/46717] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-23 05:11:17,875 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:17,877 INFO [Listener at localhost/46717] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-23 05:11:17,878 INFO [Listener at localhost/46717] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39353 connecting to ZooKeeper ensemble=127.0.0.1:51330 2023-07-23 05:11:17,881 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:393530x0, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-23 05:11:17,883 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39353-0x101909737dc000b connected 2023-07-23 05:11:17,883 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-23 05:11:17,884 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-23 05:11:17,884 DEBUG [Listener at localhost/46717] zookeeper.ZKUtil(164): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-23 05:11:17,889 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39353 2023-07-23 05:11:17,889 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39353 2023-07-23 05:11:17,890 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39353 2023-07-23 05:11:17,891 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39353 2023-07-23 05:11:17,891 DEBUG [Listener at localhost/46717] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39353 2023-07-23 05:11:17,893 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-23 05:11:17,893 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-23 05:11:17,893 INFO [Listener at localhost/46717] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-23 05:11:17,894 INFO [Listener at localhost/46717] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-23 
05:11:17,894 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-23 05:11:17,894 INFO [Listener at localhost/46717] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-23 05:11:17,894 INFO [Listener at localhost/46717] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-23 05:11:17,894 INFO [Listener at localhost/46717] http.HttpServer(1146): Jetty bound to port 36455 2023-07-23 05:11:17,894 INFO [Listener at localhost/46717] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-23 05:11:17,897 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:17,897 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@672c0920{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,AVAILABLE} 2023-07-23 05:11:17,897 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:17,897 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7f41fb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-23 05:11:18,011 INFO [Listener at localhost/46717] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-23 05:11:18,011 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-23 05:11:18,012 INFO [Listener at localhost/46717] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-23 05:11:18,012 INFO [Listener at localhost/46717] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-23 05:11:18,012 INFO [Listener at localhost/46717] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-23 05:11:18,013 INFO [Listener at localhost/46717] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@15e635d9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/java.io.tmpdir/jetty-0_0_0_0-36455-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8844764980408508880/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:18,015 INFO [Listener at localhost/46717] server.AbstractConnector(333): Started ServerConnector@42e59d1f{HTTP/1.1, (http/1.1)}{0.0.0.0:36455} 2023-07-23 05:11:18,015 INFO [Listener at localhost/46717] server.Server(415): Started @42855ms 2023-07-23 05:11:18,017 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(951): ClusterId : b3c294d6-bf9a-47ec-a205-e8c34f4cecc4 2023-07-23 05:11:18,017 DEBUG [RS:3;jenkins-hbase4:39353] 
procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-23 05:11:18,019 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-23 05:11:18,019 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-23 05:11:18,021 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-23 05:11:18,024 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ReadOnlyZKClient(139): Connect 0x20a3c930 to 127.0.0.1:51330 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-23 05:11:18,028 DEBUG [RS:3;jenkins-hbase4:39353] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2e643354, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-23 05:11:18,029 DEBUG [RS:3;jenkins-hbase4:39353] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a5ba240, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:18,042 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:39353 2023-07-23 05:11:18,042 INFO [RS:3;jenkins-hbase4:39353] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-23 05:11:18,042 INFO [RS:3;jenkins-hbase4:39353] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-23 05:11:18,042 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1022): About to register with Master. 2023-07-23 05:11:18,043 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,34969,1690089075958 with isa=jenkins-hbase4.apache.org/172.31.14.131:39353, startcode=1690089077871 2023-07-23 05:11:18,043 DEBUG [RS:3;jenkins-hbase4:39353] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-23 05:11:18,045 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50893, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-23 05:11:18,046 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34969] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,046 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-23 05:11:18,046 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72 2023-07-23 05:11:18,046 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:37269 2023-07-23 05:11:18,046 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=42277 2023-07-23 05:11:18,052 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:18,052 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:18,052 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:18,052 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:18,052 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:18,052 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,052 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-23 05:11:18,052 WARN [RS:3;jenkins-hbase4:39353] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-23 05:11:18,053 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39353,1690089077871] 2023-07-23 05:11:18,053 INFO [RS:3;jenkins-hbase4:39353] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-23 05:11:18,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:18,053 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1948): logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:18,053 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:18,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:18,055 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-23 05:11:18,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:18,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:18,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,055 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,056 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,057 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,058 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:18,058 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:18,058 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,059 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ZKUtil(162): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,059 DEBUG [RS:3;jenkins-hbase4:39353] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-23 05:11:18,059 INFO [RS:3;jenkins-hbase4:39353] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-23 05:11:18,060 INFO [RS:3;jenkins-hbase4:39353] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-23 05:11:18,061 INFO [RS:3;jenkins-hbase4:39353] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-23 05:11:18,061 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:18,061 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-23 05:11:18,062 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
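The MemStoreFlusher and PressureAwareCompactionThroughputController lines show a 782.4 M global memstore limit with a 743.3 M low-water mark and compaction throughput bounds of 100 MB/s and 50 MB/s tuned every 60000 ms. A hedged sketch of the configuration knobs that would drive those numbers; the key names are assumptions inferred from the component names rather than anything printed in this log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FlushCompactionTuningSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Fraction of heap usable by all memstores, and the low-water mark as a
        // fraction of that limit (782.4 M / 743.3 M in the log).
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.4f);
        conf.setFloat("hbase.regionserver.global.memstore.size.lower.limit", 0.95f);
        // Compaction throughput bounds matching the 100 MB/s and 50 MB/s above.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        // Throughput controller tuning period (60000 ms in the log); assumed key.
        conf.setInt("hbase.hstore.compaction.throughput.tune.period", 60000);
        System.out.println(conf.getFloat("hbase.regionserver.global.memstore.size", 0.4f));
    }
}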
2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,063 DEBUG [RS:3;jenkins-hbase4:39353] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-23 05:11:18,064 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:18,064 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:18,064 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-23 05:11:18,078 INFO [RS:3;jenkins-hbase4:39353] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-23 05:11:18,078 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39353,1690089077871-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
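The ExecutorService lines above show the per-region-server thread pools (RS_OPEN_REGION, RS_CLOSE_REGION, RS_LOG_REPLAY_OPS, and so on) being sized at startup. A brief sketch of how those pool sizes could be raised through configuration; the property keys are assumptions modeled on the executor names and are not confirmed by this log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ExecutorPoolSizingSketch {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Assumed keys: open/close region handler pools on each region server.
        conf.setInt("hbase.regionserver.executor.openregion.threads", 3);
        conf.setInt("hbase.regionserver.executor.closeregion.threads", 3);
        // Assumed key: log replay / WAL split workers (RS_LOG_REPLAY_OPS above).
        conf.setInt("hbase.regionserver.wal.max.splitters", 2);
        System.out.println(conf.getInt("hbase.regionserver.executor.openregion.threads", 3));
    }
}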
2023-07-23 05:11:18,093 INFO [RS:3;jenkins-hbase4:39353] regionserver.Replication(203): jenkins-hbase4.apache.org,39353,1690089077871 started 2023-07-23 05:11:18,093 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39353,1690089077871, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39353, sessionid=0x101909737dc000b 2023-07-23 05:11:18,093 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-23 05:11:18,093 DEBUG [RS:3;jenkins-hbase4:39353] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,093 DEBUG [RS:3;jenkins-hbase4:39353] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39353,1690089077871' 2023-07-23 05:11:18,093 DEBUG [RS:3;jenkins-hbase4:39353] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-23 05:11:18,094 DEBUG [RS:3;jenkins-hbase4:39353] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-23 05:11:18,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:18,094 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-23 05:11:18,094 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-23 05:11:18,094 DEBUG [RS:3;jenkins-hbase4:39353] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:18,094 DEBUG [RS:3;jenkins-hbase4:39353] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39353,1690089077871' 2023-07-23 05:11:18,094 DEBUG [RS:3;jenkins-hbase4:39353] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-23 05:11:18,095 DEBUG [RS:3;jenkins-hbase4:39353] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-23 05:11:18,095 DEBUG [RS:3;jenkins-hbase4:39353] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-23 05:11:18,095 INFO [RS:3;jenkins-hbase4:39353] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-23 05:11:18,095 INFO [RS:3;jenkins-hbase4:39353] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
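With the flush-table-proc and online-snapshot procedure members started, the new region server can take part in coordinated table flushes and online snapshots driven from the master. A minimal client-side sketch of the two operations that exercise those ZK-coordinated procedures; the table and snapshot names are placeholders, not taken from this test:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushSnapshotSketch {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = connection.getAdmin()) {
            TableName table = TableName.valueOf("some_table"); // placeholder table name
            // Fans out through the flush-table-proc members registered above.
            admin.flush(table);
            // Runs through the online-snapshot procedure members.
            admin.snapshot("some_table_snap", table);
        }
    }
}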
2023-07-23 05:11:18,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:18,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:18,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:18,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:18,108 DEBUG [hconnection-0x4cb027d1-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-23 05:11:18,109 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49932, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-23 05:11:18,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:18,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:18,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:18,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:18,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:48394 deadline: 1690090278117, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
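The ConstraintException above comes from the test's teardown path: it asks the rsgroup endpoint to move the master's address (port 34969) into the freshly added "master" group, and RSGroupAdminServer rejects it because only servers known to the rsgroup manager as live or offline region servers can be group members. A hedged sketch of the same call using the classes named in the stack trace; the connection setup around it is assumed:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
    public static void main(String[] args) throws Exception {
        try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
            RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
            Address masterAddress = Address.fromParts("jenkins-hbase4.apache.org", 34969);
            // Rejected with ConstraintException: this address belongs to the master,
            // not to a region server tracked by the rsgroup manager, hence
            // "either offline or it does not exist" in the log above.
            rsGroupAdmin.moveServers(Collections.singleton(masterAddress), "master");
        }
    }
}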
2023-07-23 05:11:18,118 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:18,119 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:18,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:18,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:18,120 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:18,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:18,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:18,172 INFO [Listener at localhost/46717] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 513) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72-prefix:jenkins-hbase4.apache.org,35827,1690089076137 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1278153789@qtp-1331529307-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/46717-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Session-HouseKeeper-298e7858-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-c71127-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp709633630-2237 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@315029cc sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4f45754a-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: jenkins-hbase4:35827Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: M:0;jenkins-hbase4:34969 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x222ba5ad-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7856e46e-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1536065528@qtp-837748672-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:43833 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:44369 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/46717.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x446486fd-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60906@0x4162bfb9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:44369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7856e46e-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp709633630-2233 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 37269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData-prefix:jenkins-hbase4.apache.org,34969,1690089075958 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_677690380_17 at /127.0.0.1:49374 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp709633630-2234-acceptor-0@12973fef-ServerConnector@70813e9d{HTTP/1.1, (http/1.1)}{0.0.0.0:46767} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/46717-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:2;jenkins-hbase4:44971-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 46717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: Listener at localhost/46717-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp724552029-2207 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:37269 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@7e374ea1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins@localhost:44369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 38451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:39353 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 46717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp724552029-2204 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44971Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1175931123-2576 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:37269 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 46717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2295 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6880c8bd[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2298 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34851,1690089070495 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:44369 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:37269 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72-prefix:jenkins-hbase4.apache.org,44971,1690089076453.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data4/current/BP-1197641724-172.31.14.131-1690089075257 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x20a3c930 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:39353Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: qtp1091847445-2306 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data1/current/BP-1197641724-172.31.14.131-1690089075257 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,34969,1690089075958 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: qtp1846640454-2264-acceptor-0@63e827e1-ServerConnector@60a59e1f{HTTP/1.1, (http/1.1)}{0.0.0.0:44685} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1846640454-2265 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-560-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 38451 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72-prefix:jenkins-hbase4.apache.org,44971,1690089076453 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1846640454-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34155-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1091847445-2304 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: 196436652@qtp-704575498-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x7f20dc29 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-551-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2293 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp724552029-2205 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp724552029-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp724552029-2206 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_677690380_17 at /127.0.0.1:53738 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
qtp1159389665-2296 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 37269 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:44369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x53254b14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) 
org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:44369 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Listener at localhost/46717-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:53766 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1091847445-2309 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7856e46e-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:43649 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1846640454-2266 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:44369 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-102660338_17 at /127.0.0.1:53704 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1846640454-2268 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1846640454-2267 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-41329130_17 at /127.0.0.1:53778 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp724552029-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-3ac747bc-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 46717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native 
Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-5abccefa-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 37269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1091847445-2305 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:37269 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp709633630-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:37269 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 38451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7856e46e-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:44971 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x01fb70e0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-565-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x20a3c930-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@49b52f3f java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1091847445-2307 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@ed2996e java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:37269 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x4cb027d1-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-102660338_17 at /127.0.0.1:53724 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data5/current/BP-1197641724-172.31.14.131-1690089075257 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60906@0x4162bfb9-SendThread(127.0.0.1:60906) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:44369 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data3/current/BP-1197641724-172.31.14.131-1690089075257 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43649Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
pool-540-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/34155-SendThread(127.0.0.1:60906) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp709633630-2236 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 33363 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@7c727e91 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:49312 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1091847445-2311 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 37269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Server handler 1 on default port 33363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@7731d1b0 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72-prefix:jenkins-hbase4.apache.org,43649,1690089076297 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7856e46e-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@31d498d4[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x01fb70e0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1175931123-2575 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:53764 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 37269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x7f20dc29-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ProcessThread(sid:0 cport:51330): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1175931123-2569 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: 1715237978@qtp-704575498-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37375 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1175931123-2572 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7856e46e-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 38451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x53254b14-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: CacheReplicationMonitor(71002711) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x446486fd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp709633630-2235 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 2 on default port 38451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:37269 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x222ba5ad sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
org.apache.hadoop.util.JvmPauseMonitor$Monitor@7d780e1a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1175931123-2574 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp724552029-2202 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3e511021 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@24090403 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging 
thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:53676 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1175931123-2570-acceptor-0@7697f342-ServerConnector@42e59d1f{HTTP/1.1, (http/1.1)}{0.0.0.0:36455} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1846640454-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1175931123-2571 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7856e46e-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 38451 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1091847445-2310 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-229a1afd-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:53786 [Receiving block 
BP-1197641724-172.31.14.131-1690089075257:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-556-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-41329130_17 at /127.0.0.1:53752 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-102660338_17 at /127.0.0.1:49274 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@11bada21[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:60906@0x4162bfb9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:39353-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp709633630-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-542-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@9c5b3bd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_677690380_17 at /127.0.0.1:53752 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:35827 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=34969 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x4b5dcbf3-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089076819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: 1899323936@qtp-1283973315-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: 1527843846@qtp-1283973315-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34193 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:37269 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x4b5dcbf3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/792938361.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=39353 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1091847445-2308-acceptor-0@b876329-ServerConnector@4fd3b347{HTTP/1.1, (http/1.1)}{0.0.0.0:34819} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089076819 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@420dd8f6 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:43649-longCompactions-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x222ba5ad-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:44369 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@5c289ca7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x4b5dcbf3-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data6/current/BP-1197641724-172.31.14.131-1690089075257 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=34969 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 46717 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:37269 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/46717-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:53742 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1159389665-2294-acceptor-0@6fb9108f-ServerConnector@6a4b4809{HTTP/1.1, (http/1.1)}{0.0.0.0:40545} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 4 on default port 33363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43649 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data3) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x446486fd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x53254b14-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x20a3c930-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 46717 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS:0;jenkins-hbase4:35827-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1073590046) connection to localhost/127.0.0.1:37269 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x01fb70e0-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp709633630-2238 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44971 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp724552029-2203-acceptor-0@6b8ce84f-ServerConnector@60773ed9{HTTP/1.1, (http/1.1)}{0.0.0.0:42277} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:51330@0x7f20dc29-SendThread(127.0.0.1:51330) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 37269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1175931123-2573 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35827 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1197641724-172.31.14.131-1690089075257:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4cb027d1-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-41329130_17 at /127.0.0.1:49322 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data2/current/BP-1197641724-172.31.14.131-1690089075257 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33363 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-630092662_17 at /127.0.0.1:49332 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:51330 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: qtp1846640454-2263 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1057184077.run(Unknown Source) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1420101469@qtp-1331529307-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34311 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-545-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 585111834@qtp-837748672-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_677690380_17 at /127.0.0.1:49300 [Receiving block BP-1197641724-172.31.14.131-1690089075257:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3c3ec0f8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7856e46e-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=846 (was 808) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 515), ProcessCount=179 (was 179), AvailableMemoryMB=6106 (was 6337) 2023-07-23 05:11:18,175 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-23 05:11:18,193 INFO [Listener at localhost/46717] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=560, OpenFileDescriptor=846, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=179, AvailableMemoryMB=6105 2023-07-23 05:11:18,193 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-23 05:11:18,193 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-23 05:11:18,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:18,197 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:18,197 INFO [RS:3;jenkins-hbase4:39353] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39353%2C1690089077871, suffix=, logDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,39353,1690089077871, archiveDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs, maxLogs=32 2023-07-23 05:11:18,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:18,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:18,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:18,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:18,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:18,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:18,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:18,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:18,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:18,209 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:18,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:18,213 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:18,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:18,219 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK] 2023-07-23 05:11:18,221 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK] 2023-07-23 05:11:18,221 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK] 2023-07-23 05:11:18,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:18,222 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:18,223 INFO [RS:3;jenkins-hbase4:39353] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/WALs/jenkins-hbase4.apache.org,39353,1690089077871/jenkins-hbase4.apache.org%2C39353%2C1690089077871.1690089078198 2023-07-23 05:11:18,223 DEBUG [RS:3;jenkins-hbase4:39353] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:40565,DS-454ed3e1-3fd5-469f-b62e-757e0072e1df,DISK], DatanodeInfoWithStorage[127.0.0.1:42863,DS-6a1c8e59-5803-4e41-ad5b-ca1c403eb19b,DISK], DatanodeInfoWithStorage[127.0.0.1:40503,DS-aec4ff92-dd92-46fd-b8b6-504567634b69,DISK]] 2023-07-23 05:11:18,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:18,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:18,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:18,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:18,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:48394 deadline: 1690090278227, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:18,227 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:18,229 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:18,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:18,230 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:18,231 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:18,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:18,231 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:18,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:18,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-23 05:11:18,236 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:18,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-23 05:11:18,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 05:11:18,237 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:18,238 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:18,238 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:18,240 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-23 05:11:18,242 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 
05:11:18,242 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb empty. 2023-07-23 05:11:18,243 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,243 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-23 05:11:18,253 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-23 05:11:18,254 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => e8a2a80c8355de4210f17e6d3f6d37fb, NAME => 't1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp 2023-07-23 05:11:18,262 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:18,262 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing e8a2a80c8355de4210f17e6d3f6d37fb, disabling compactions & flushes 2023-07-23 05:11:18,262 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:18,262 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:18,262 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. after waiting 0 ms 2023-07-23 05:11:18,262 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:18,262 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:18,262 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for e8a2a80c8355de4210f17e6d3f6d37fb: 2023-07-23 05:11:18,264 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-23 05:11:18,265 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089078265"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089078265"}]},"ts":"1690089078265"} 2023-07-23 05:11:18,266 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-23 05:11:18,267 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-23 05:11:18,267 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089078267"}]},"ts":"1690089078267"} 2023-07-23 05:11:18,268 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-23 05:11:18,272 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-23 05:11:18,272 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-23 05:11:18,272 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-23 05:11:18,272 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-23 05:11:18,272 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-23 05:11:18,272 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-23 05:11:18,272 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, ASSIGN}] 2023-07-23 05:11:18,273 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, ASSIGN 2023-07-23 05:11:18,273 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35827,1690089076137; forceNewPlan=false, retain=false 2023-07-23 05:11:18,319 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-23 05:11:18,340 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 05:11:18,423 INFO [jenkins-hbase4:34969] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-23 05:11:18,424 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e8a2a80c8355de4210f17e6d3f6d37fb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,424 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089078424"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089078424"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089078424"}]},"ts":"1690089078424"} 2023-07-23 05:11:18,426 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure e8a2a80c8355de4210f17e6d3f6d37fb, server=jenkins-hbase4.apache.org,35827,1690089076137}] 2023-07-23 05:11:18,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 05:11:18,578 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,578 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-23 05:11:18,580 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:47502, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-23 05:11:18,583 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:18,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e8a2a80c8355de4210f17e6d3f6d37fb, NAME => 't1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.', STARTKEY => '', ENDKEY => ''} 2023-07-23 05:11:18,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-23 05:11:18,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,585 INFO [StoreOpener-e8a2a80c8355de4210f17e6d3f6d37fb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,586 DEBUG [StoreOpener-e8a2a80c8355de4210f17e6d3f6d37fb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/cf1 2023-07-23 05:11:18,586 DEBUG [StoreOpener-e8a2a80c8355de4210f17e6d3f6d37fb-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/cf1 2023-07-23 05:11:18,587 INFO [StoreOpener-e8a2a80c8355de4210f17e6d3f6d37fb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e8a2a80c8355de4210f17e6d3f6d37fb columnFamilyName cf1 2023-07-23 05:11:18,587 INFO [StoreOpener-e8a2a80c8355de4210f17e6d3f6d37fb-1] regionserver.HStore(310): Store=e8a2a80c8355de4210f17e6d3f6d37fb/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-23 05:11:18,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:18,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-23 05:11:18,592 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e8a2a80c8355de4210f17e6d3f6d37fb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10361813280, jitterRate=-0.03498093783855438}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-23 05:11:18,592 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e8a2a80c8355de4210f17e6d3f6d37fb: 2023-07-23 05:11:18,593 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb., pid=14, masterSystemTime=1690089078578 2023-07-23 05:11:18,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:18,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 
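Editor's note: the CompactionConfiguration entry above reports the effective settings for store cf1 of region e8a2a80c8355de4210f17e6d3f6d37fb (minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, minCompactSize 128 MB). Those values are driven by ordinary HBase configuration keys; a minimal sketch of setting the same values explicitly (key names are the standard ones, shown purely for illustration, not something this test does) could look like:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
  // Sketch only: these keys correspond to the values CompactionConfiguration
  // printed while opening cf1 above; adjust for a real deployment.
  public static Configuration compactionDefaultsAsLogged() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);          // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);         // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);   // file-selection ratio
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    return conf;
  }
}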
2023-07-23 05:11:18,597 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=e8a2a80c8355de4210f17e6d3f6d37fb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,597 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089078597"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690089078597"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690089078597"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690089078597"}]},"ts":"1690089078597"} 2023-07-23 05:11:18,599 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-23 05:11:18,600 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure e8a2a80c8355de4210f17e6d3f6d37fb, server=jenkins-hbase4.apache.org,35827,1690089076137 in 172 msec 2023-07-23 05:11:18,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-23 05:11:18,602 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, ASSIGN in 327 msec 2023-07-23 05:11:18,602 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-23 05:11:18,603 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089078602"}]},"ts":"1690089078602"} 2023-07-23 05:11:18,603 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-23 05:11:18,605 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-23 05:11:18,607 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 372 msec 2023-07-23 05:11:18,842 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-23 05:11:18,842 INFO [Listener at localhost/46717] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-23 05:11:18,842 DEBUG [Listener at localhost/46717] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-23 05:11:18,842 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:18,844 INFO [Listener at localhost/46717] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-23 05:11:18,845 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:18,845 INFO [Listener at localhost/46717] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
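Editor's note: the CreateTableProcedure that just finished (pid=12) was initiated by a client-side createTable call for table 't1' with a single column family 'cf1' keeping one version. A minimal sketch of issuing the equivalent request with the HBase 2.x client API — connection details elided, class name illustrative, not the test's actual code — looks roughly like this:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Table 't1' with one column family 'cf1' keeping a single version,
      // mirroring the descriptor printed by HMaster in the log above.
      admin.createTable(
          TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
              .setColumnFamily(
                  ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
                      .setMaxVersions(1)
                      .build())
              .build());
      // createTable blocks until the master reports the procedure as done,
      // i.e. the "Operation: CREATE ... procId: 12 completed" line above.
    }
  }
}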
2023-07-23 05:11:18,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-23 05:11:18,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-23 05:11:18,849 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-23 05:11:18,849 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-23 05:11:18,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 352 connection: 172.31.14.131:48394 deadline: 1690089138846, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-23 05:11:18,851 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:18,852 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=5 msec 2023-07-23 05:11:18,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:18,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:18,952 INFO [Listener at localhost/46717] client.HBaseAdmin$15(890): Started disable of t1 2023-07-23 05:11:18,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-23 05:11:18,953 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-23 05:11:18,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 05:11:18,956 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089078956"}]},"ts":"1690089078956"} 2023-07-23 05:11:18,957 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-23 05:11:18,959 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-23 05:11:18,959 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, UNASSIGN}] 2023-07-23 05:11:18,960 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, UNASSIGN 2023-07-23 05:11:18,960 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e8a2a80c8355de4210f17e6d3f6d37fb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:18,960 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089078960"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690089078960"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690089078960"}]},"ts":"1690089078960"} 2023-07-23 05:11:18,962 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure e8a2a80c8355de4210f17e6d3f6d37fb, server=jenkins-hbase4.apache.org,35827,1690089076137}] 2023-07-23 05:11:19,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 05:11:19,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:19,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e8a2a80c8355de4210f17e6d3f6d37fb, disabling compactions & flushes 2023-07-23 05:11:19,114 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:19,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:19,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. after waiting 0 ms 2023-07-23 05:11:19,114 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 
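Editor's note: the second create attempt logged above (pid=15) was rejected with TableExistsException and rolled back, which is exactly what this test provokes on purpose. An application that wanted to avoid triggering that server-side rollback would normally probe for the table first; a small hedged sketch (helper name is illustrative):

import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class CreateIfAbsentSketch {
  // Sketch only: create the table unless it already exists, tolerating the
  // race where another client creates it between the check and the call.
  static void createIfAbsent(Connection conn, TableDescriptor desc) throws java.io.IOException {
    try (Admin admin = conn.getAdmin()) {
      if (!admin.tableExists(desc.getTableName())) {
        try {
          admin.createTable(desc);
        } catch (TableExistsException e) {
          // Lost the race: someone else created it first, which is fine here.
        }
      }
    }
  }
}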
2023-07-23 05:11:19,117 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-23 05:11:19,118 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb. 2023-07-23 05:11:19,118 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e8a2a80c8355de4210f17e6d3f6d37fb: 2023-07-23 05:11:19,119 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:19,120 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=e8a2a80c8355de4210f17e6d3f6d37fb, regionState=CLOSED 2023-07-23 05:11:19,120 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690089079120"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690089079120"}]},"ts":"1690089079120"} 2023-07-23 05:11:19,122 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-23 05:11:19,122 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure e8a2a80c8355de4210f17e6d3f6d37fb, server=jenkins-hbase4.apache.org,35827,1690089076137 in 159 msec 2023-07-23 05:11:19,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-23 05:11:19,123 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=e8a2a80c8355de4210f17e6d3f6d37fb, UNASSIGN in 163 msec 2023-07-23 05:11:19,124 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690089079124"}]},"ts":"1690089079124"} 2023-07-23 05:11:19,125 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-23 05:11:19,127 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-23 05:11:19,129 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 175 msec 2023-07-23 05:11:19,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-23 05:11:19,258 INFO [Listener at localhost/46717] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-23 05:11:19,259 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-23 05:11:19,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-23 05:11:19,261 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-23 05:11:19,261 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-23 05:11:19,262 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-23 05:11:19,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,264 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,266 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:19,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 05:11:19,267 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/cf1, FileablePath, hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/recovered.edits] 2023-07-23 05:11:19,272 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/recovered.edits/4.seqid to hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/archive/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb/recovered.edits/4.seqid 2023-07-23 05:11:19,272 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/.tmp/data/default/t1/e8a2a80c8355de4210f17e6d3f6d37fb 2023-07-23 05:11:19,272 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-23 05:11:19,275 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-23 05:11:19,276 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-23 05:11:19,278 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-23 05:11:19,279 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-23 05:11:19,279 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-23 05:11:19,279 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690089079279"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:19,280 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-23 05:11:19,281 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e8a2a80c8355de4210f17e6d3f6d37fb, NAME => 't1,,1690089078233.e8a2a80c8355de4210f17e6d3f6d37fb.', STARTKEY => '', ENDKEY => ''}] 2023-07-23 05:11:19,281 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-23 05:11:19,281 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690089079281"}]},"ts":"9223372036854775807"} 2023-07-23 05:11:19,282 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-23 05:11:19,284 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-23 05:11:19,285 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 25 msec 2023-07-23 05:11:19,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-23 05:11:19,368 INFO [Listener at localhost/46717] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-23 05:11:19,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:19,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
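Editor's note: the DISABLE (pid=16) and DELETE (pid=19) procedures above are the server side of the usual two-step table removal: the single region of t1 is closed, its directory archived by HFileArchiver, and its rows removed from hbase:meta. From a client this is just two Admin calls; a minimal sketch (not the test's code, which goes through HBaseTestingUtility):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class DropTableSketch {
  // Sketch only: disable then delete, mirroring the DISABLE/DELETE operations
  // reported as completed in the log above.
  static void dropTable(Connection conn, String name) throws java.io.IOException {
    TableName table = TableName.valueOf(name);
    try (Admin admin = conn.getAdmin()) {
      if (admin.tableExists(table)) {
        if (!admin.isTableDisabled(table)) {
          admin.disableTable(table);   // DisableTableProcedure
        }
        admin.deleteTable(table);      // DeleteTableProcedure; region dirs are archived first
      }
    }
  }
}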
2023-07-23 05:11:19,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:19,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:19,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:19,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:19,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:19,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:19,385 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:19,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:19,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:19,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:48394 deadline: 1690090279395, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:19,395 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:19,399 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:19,399 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,400 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:19,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:19,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:19,419 INFO [Listener at localhost/46717] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=570 (was 560) - Thread LEAK? -, OpenFileDescriptor=854 (was 846) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=179 (was 179), AvailableMemoryMB=6099 (was 6105) 2023-07-23 05:11:19,419 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-23 05:11:19,438 INFO [Listener at localhost/46717] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=570, OpenFileDescriptor=854, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=179, AvailableMemoryMB=6093 2023-07-23 05:11:19,438 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=570 is superior to 500 2023-07-23 05:11:19,438 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-23 05:11:19,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:19,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 05:11:19,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:19,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:19,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:19,444 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:19,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,447 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:19,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:19,451 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:19,451 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:19,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,453 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:19,460 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,461 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:48394 deadline: 1690090279460, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:19,461 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:19,464 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:19,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,465 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:19,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:19,465 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:19,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-23 05:11:19,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:19,467 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-23 05:11:19,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-23 05:11:19,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-23 05:11:19,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:19,476 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:19,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:19,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:19,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:19,477 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:19,479 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:19,481 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:19,483 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:19,484 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:19,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,487 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,491 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:19,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:48394 deadline: 1690090279493, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:19,494 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:19,496 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:19,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,497 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,497 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:19,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:19,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:19,518 INFO [Listener at localhost/46717] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572 (was 570) - Thread LEAK? 
-, OpenFileDescriptor=854 (was 854), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=179 (was 179), AvailableMemoryMB=6092 (was 6093) 2023-07-23 05:11:19,519 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-23 05:11:19,539 INFO [Listener at localhost/46717] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572, OpenFileDescriptor=854, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=179, AvailableMemoryMB=6090 2023-07-23 05:11:19,539 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-23 05:11:19,540 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-23 05:11:19,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:19,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 05:11:19,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:19,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:19,546 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:19,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:19,550 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:19,552 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:19,555 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:19,555 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:19,557 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,558 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:19,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,565 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:48394 deadline: 1690090279564, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:19,565 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:19,567 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:19,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,568 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:19,568 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:19,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:19,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,572 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:19,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:19,573 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:19,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:19,574 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:19,575 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:19,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:19,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:19,587 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:19,588 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:19,590 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,591 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,592 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,593 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,595 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:19,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,598 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:48394 deadline: 1690090279597, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:19,598 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:19,600 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:19,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,601 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,601 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:19,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:19,602 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:19,621 INFO [Listener at localhost/46717] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=573 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=854 (was 854), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=179 (was 179), AvailableMemoryMB=6087 (was 6090) 2023-07-23 05:11:19,621 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-23 05:11:19,640 INFO [Listener at localhost/46717] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573, OpenFileDescriptor=854, MaxFileDescriptor=60000, SystemLoadAverage=483, ProcessCount=179, AvailableMemoryMB=6086 2023-07-23 05:11:19,640 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-23 05:11:19,640 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-23 05:11:19,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,645 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:19,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-23 05:11:19,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:19,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:19,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:19,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:19,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:19,655 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:19,658 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:19,659 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:19,661 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,661 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,664 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:19,665 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,667 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:19,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:48394 deadline: 1690090279671, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:19,672 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-23 05:11:19,674 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:19,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,675 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:19,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:19,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:19,676 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-23 05:11:19,676 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-23 05:11:19,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-23 05:11:19,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,680 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-23 05:11:19,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-23 05:11:19,686 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,690 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 05:11:19,694 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:19,696 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 9 msec 2023-07-23 05:11:19,791 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-23 05:11:19,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-23 05:11:19,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:19,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:48394 deadline: 1690090279792, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-23 05:11:19,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-23 05:11:19,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,813 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-23 05:11:19,815 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-23 05:11:19,826 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-23 05:11:19,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-23 05:11:19,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-23 05:11:19,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-23 05:11:19,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:19,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-23 05:11:19,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:19,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-23 05:11:19,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:19,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:19,928 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:19,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-23 05:11:19,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,933 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,935 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-23 05:11:19,937 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,939 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-23 05:11:19,939 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-23 05:11:19,939 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,941 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-23 05:11:19,942 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 11 msec 2023-07-23 05:11:20,036 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-23 05:11:20,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-23 05:11:20,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-23 05:11:20,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:20,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:20,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-23 05:11:20,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:20,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:20,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:20,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:20,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:48394 deadline: 1690089140048, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-23 05:11:20,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:20,052 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:20,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:20,053 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:20,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:20,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:20,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:20,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-23 05:11:20,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:20,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:20,057 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-23 05:11:20,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:20,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-23 05:11:20,059 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-23 05:11:20,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-23 05:11:20,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-23 05:11:20,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-23 05:11:20,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-23 05:11:20,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:20,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-23 05:11:20,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-23 05:11:20,066 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-23 05:11:20,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-23 05:11:20,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-23 05:11:20,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-23 05:11:20,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-23 05:11:20,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-23 05:11:20,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:20,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:20,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:34969] to rsgroup master 2023-07-23 05:11:20,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-23 05:11:20,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:48394 deadline: 1690090280075, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 2023-07-23 05:11:20,076 WARN [Listener at localhost/46717] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:34969 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-23 05:11:20,078 INFO [Listener at localhost/46717] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-23 05:11:20,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-23 05:11:20,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-23 05:11:20,079 INFO [Listener at localhost/46717] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35827, jenkins-hbase4.apache.org:39353, jenkins-hbase4.apache.org:43649, jenkins-hbase4.apache.org:44971], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-23 05:11:20,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-23 05:11:20,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=34969] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-23 05:11:20,098 INFO [Listener at localhost/46717] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=573 (was 573), OpenFileDescriptor=848 (was 854), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=483 (was 483), ProcessCount=179 (was 179), AvailableMemoryMB=6069 (was 6086) 2023-07-23 05:11:20,098 WARN [Listener at localhost/46717] hbase.ResourceChecker(130): Thread=573 is superior to 500 2023-07-23 05:11:20,098 INFO [Listener at localhost/46717] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-23 05:11:20,099 INFO [Listener at localhost/46717] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-23 05:11:20,099 DEBUG [Listener at localhost/46717] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x01fb70e0 to 127.0.0.1:51330 2023-07-23 05:11:20,099 DEBUG [Listener at localhost/46717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,099 DEBUG [Listener at localhost/46717] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-23 
05:11:20,099 DEBUG [Listener at localhost/46717] util.JVMClusterUtil(257): Found active master hash=1135274744, stopped=false 2023-07-23 05:11:20,099 DEBUG [Listener at localhost/46717] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-23 05:11:20,099 DEBUG [Listener at localhost/46717] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-23 05:11:20,099 INFO [Listener at localhost/46717] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:20,101 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:20,101 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:20,101 INFO [Listener at localhost/46717] procedure2.ProcedureExecutor(629): Stopping 2023-07-23 05:11:20,101 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:20,101 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:20,101 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-23 05:11:20,101 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:20,101 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:20,101 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:20,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:20,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-23 05:11:20,102 DEBUG [Listener at localhost/46717] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x446486fd to 127.0.0.1:51330 2023-07-23 05:11:20,102 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 
2023-07-23 05:11:20,102 DEBUG [Listener at localhost/46717] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,102 INFO [Listener at localhost/46717] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35827,1690089076137' ***** 2023-07-23 05:11:20,102 INFO [Listener at localhost/46717] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:20,102 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:20,102 INFO [Listener at localhost/46717] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43649,1690089076297' ***** 2023-07-23 05:11:20,103 INFO [Listener at localhost/46717] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:20,103 INFO [Listener at localhost/46717] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44971,1690089076453' ***** 2023-07-23 05:11:20,103 INFO [Listener at localhost/46717] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:20,103 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:20,104 INFO [Listener at localhost/46717] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39353,1690089077871' ***** 2023-07-23 05:11:20,104 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:20,104 INFO [Listener at localhost/46717] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-23 05:11:20,105 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:20,109 INFO [RS:1;jenkins-hbase4:43649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@70ea8f0d{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:20,109 INFO [RS:2;jenkins-hbase4:44971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@61508a33{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:20,109 INFO [RS:0;jenkins-hbase4:35827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@3a3e072e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:20,109 INFO [RS:3;jenkins-hbase4:39353] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@15e635d9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-23 05:11:20,110 INFO [RS:1;jenkins-hbase4:43649] server.AbstractConnector(383): Stopped ServerConnector@60a59e1f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:20,110 INFO [RS:0;jenkins-hbase4:35827] server.AbstractConnector(383): Stopped ServerConnector@70813e9d{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:20,110 INFO [RS:2;jenkins-hbase4:44971] server.AbstractConnector(383): Stopped ServerConnector@6a4b4809{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:20,110 INFO [RS:0;jenkins-hbase4:35827] 
session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:20,110 INFO [RS:3;jenkins-hbase4:39353] server.AbstractConnector(383): Stopped ServerConnector@42e59d1f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:20,110 INFO [RS:1;jenkins-hbase4:43649] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:20,110 INFO [RS:3;jenkins-hbase4:39353] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:20,110 INFO [RS:2;jenkins-hbase4:44971] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:20,111 INFO [RS:0;jenkins-hbase4:35827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2f89cebb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:20,111 INFO [RS:1;jenkins-hbase4:43649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6dc816f5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:20,111 INFO [RS:3;jenkins-hbase4:39353] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7f41fb{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:20,111 INFO [RS:2;jenkins-hbase4:44971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6fc3bb30{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:20,113 INFO [RS:3;jenkins-hbase4:39353] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@672c0920{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:20,113 INFO [RS:1;jenkins-hbase4:43649] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@487d57f8{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:20,112 INFO [RS:0;jenkins-hbase4:35827] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@52e37a10{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:20,112 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-23 05:11:20,114 INFO [RS:3;jenkins-hbase4:39353] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:20,114 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-23 05:11:20,114 INFO [RS:1;jenkins-hbase4:43649] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:20,114 INFO [RS:2;jenkins-hbase4:44971] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@55105c90{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,STOPPED} 2023-07-23 
05:11:20,115 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:20,115 INFO [RS:1;jenkins-hbase4:43649] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:20,114 INFO [RS:3;jenkins-hbase4:39353] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:20,114 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:20,115 INFO [RS:3;jenkins-hbase4:39353] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:20,115 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:20,115 DEBUG [RS:3;jenkins-hbase4:39353] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x20a3c930 to 127.0.0.1:51330 2023-07-23 05:11:20,115 DEBUG [RS:3;jenkins-hbase4:39353] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,115 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39353,1690089077871; all regions closed. 2023-07-23 05:11:20,115 INFO [RS:0;jenkins-hbase4:35827] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:20,115 INFO [RS:0;jenkins-hbase4:35827] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-23 05:11:20,115 INFO [RS:0;jenkins-hbase4:35827] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:20,115 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:20,115 INFO [RS:1;jenkins-hbase4:43649] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:20,115 DEBUG [RS:0;jenkins-hbase4:35827] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4b5dcbf3 to 127.0.0.1:51330 2023-07-23 05:11:20,115 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:20,115 INFO [RS:2;jenkins-hbase4:44971] regionserver.HeapMemoryManager(220): Stopping 2023-07-23 05:11:20,116 DEBUG [RS:0;jenkins-hbase4:35827] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,115 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(3305): Received CLOSE for 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:20,116 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35827,1690089076137; all regions closed. 2023-07-23 05:11:20,122 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:20,122 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-23 05:11:20,122 DEBUG [RS:1;jenkins-hbase4:43649] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x222ba5ad to 127.0.0.1:51330 2023-07-23 05:11:20,122 INFO [RS:2;jenkins-hbase4:44971] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-23 05:11:20,122 INFO [RS:2;jenkins-hbase4:44971] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-23 05:11:20,122 DEBUG [RS:1;jenkins-hbase4:43649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,122 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-23 05:11:20,122 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1478): Online Regions={556139edd0141ac2f7d66b7c7bb9ba5f=hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f.} 2023-07-23 05:11:20,122 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(3305): Received CLOSE for 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:20,122 DEBUG [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1504): Waiting on 556139edd0141ac2f7d66b7c7bb9ba5f 2023-07-23 05:11:20,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 556139edd0141ac2f7d66b7c7bb9ba5f, disabling compactions & flushes 2023-07-23 05:11:20,122 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:20,122 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:20,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. after waiting 0 ms 2023-07-23 05:11:20,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:20,123 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:20,123 DEBUG [RS:2;jenkins-hbase4:44971] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7f20dc29 to 127.0.0.1:51330 2023-07-23 05:11:20,123 DEBUG [RS:2;jenkins-hbase4:44971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 6bca6539cf64287f2b8b50b4d00a25f5, disabling compactions & flushes 2023-07-23 05:11:20,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 556139edd0141ac2f7d66b7c7bb9ba5f 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-23 05:11:20,123 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:20,123 INFO [RS:2;jenkins-hbase4:44971] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:20,123 INFO [RS:2;jenkins-hbase4:44971] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:20,123 INFO [RS:2;jenkins-hbase4:44971] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-23 05:11:20,123 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-23 05:11:20,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:20,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. after waiting 0 ms 2023-07-23 05:11:20,123 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:20,124 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 6bca6539cf64287f2b8b50b4d00a25f5 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-23 05:11:20,126 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-23 05:11:20,126 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1478): Online Regions={6bca6539cf64287f2b8b50b4d00a25f5=hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5., 1588230740=hbase:meta,,1.1588230740} 2023-07-23 05:11:20,127 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1504): Waiting on 1588230740, 6bca6539cf64287f2b8b50b4d00a25f5 2023-07-23 05:11:20,127 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-23 05:11:20,127 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-23 05:11:20,127 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-23 05:11:20,127 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-23 05:11:20,127 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-23 05:11:20,127 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-23 05:11:20,133 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,138 DEBUG [RS:3;jenkins-hbase4:39353] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs 2023-07-23 05:11:20,138 INFO [RS:3;jenkins-hbase4:39353] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39353%2C1690089077871:(num 1690089078198) 2023-07-23 05:11:20,138 DEBUG [RS:3;jenkins-hbase4:39353] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,138 INFO [RS:3;jenkins-hbase4:39353] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,139 DEBUG [RS:0;jenkins-hbase4:35827] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs 2023-07-23 05:11:20,139 INFO [RS:0;jenkins-hbase4:35827] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35827%2C1690089076137:(num 1690089077103) 2023-07-23 05:11:20,139 DEBUG [RS:0;jenkins-hbase4:35827] 
ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,139 INFO [RS:0;jenkins-hbase4:35827] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,143 INFO [RS:0;jenkins-hbase4:35827] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:20,143 INFO [RS:0;jenkins-hbase4:35827] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:20,143 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:20,143 INFO [RS:0;jenkins-hbase4:35827] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:20,143 INFO [RS:0;jenkins-hbase4:35827] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:20,144 INFO [RS:0;jenkins-hbase4:35827] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35827 2023-07-23 05:11:20,145 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,148 INFO [RS:3;jenkins-hbase4:39353] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:20,148 INFO [RS:3;jenkins-hbase4:39353] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:20,148 INFO [RS:3;jenkins-hbase4:39353] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:20,148 INFO [RS:3;jenkins-hbase4:39353] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:20,148 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35827,1690089076137 2023-07-23 05:11:20,151 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,151 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35827,1690089076137] 2023-07-23 05:11:20,151 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35827,1690089076137; numProcessing=1 2023-07-23 05:11:20,152 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,153 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35827,1690089076137 already deleted, retry=false 2023-07-23 05:11:20,153 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35827,1690089076137 expired; onlineServers=3 2023-07-23 05:11:20,160 INFO [RS:3;jenkins-hbase4:39353] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39353 2023-07-23 05:11:20,164 DEBUG [Listener at localhost/46717-EventThread] 
zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:20,164 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,164 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:20,164 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:20,167 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39353,1690089077871 2023-07-23 05:11:20,167 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39353,1690089077871] 2023-07-23 05:11:20,167 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39353,1690089077871; numProcessing=2 2023-07-23 05:11:20,169 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39353,1690089077871 already deleted, retry=false 2023-07-23 05:11:20,169 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39353,1690089077871 expired; onlineServers=2 2023-07-23 05:11:20,169 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,191 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/.tmp/info/1e31d3e7e73d4634a6626b6163d4c645 2023-07-23 05:11:20,196 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/.tmp/info/444a1a344f4b46b7a79b7a281d8ad645 2023-07-23 05:11:20,200 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1e31d3e7e73d4634a6626b6163d4c645 2023-07-23 05:11:20,203 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 444a1a344f4b46b7a79b7a281d8ad645 2023-07-23 05:11:20,204 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/.tmp/info/444a1a344f4b46b7a79b7a281d8ad645 as hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/info/444a1a344f4b46b7a79b7a281d8ad645 2023-07-23 05:11:20,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 444a1a344f4b46b7a79b7a281d8ad645 2023-07-23 05:11:20,212 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/info/444a1a344f4b46b7a79b7a281d8ad645, entries=3, sequenceid=9, filesize=5.0 K 2023-07-23 05:11:20,213 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 556139edd0141ac2f7d66b7c7bb9ba5f in 90ms, sequenceid=9, compaction requested=false 2023-07-23 05:11:20,219 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/.tmp/m/8f332336ac984865a8defce8a8d0b1be 2023-07-23 05:11:20,232 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8f332336ac984865a8defce8a8d0b1be 2023-07-23 05:11:20,234 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/.tmp/m/8f332336ac984865a8defce8a8d0b1be as hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/m/8f332336ac984865a8defce8a8d0b1be 2023-07-23 05:11:20,234 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/.tmp/rep_barrier/ce40ad976cc94518b0e1d1a6b2a63604 2023-07-23 05:11:20,239 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce40ad976cc94518b0e1d1a6b2a63604 2023-07-23 05:11:20,243 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/namespace/556139edd0141ac2f7d66b7c7bb9ba5f/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-23 05:11:20,245 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 
2023-07-23 05:11:20,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 556139edd0141ac2f7d66b7c7bb9ba5f: 2023-07-23 05:11:20,245 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690089077348.556139edd0141ac2f7d66b7c7bb9ba5f. 2023-07-23 05:11:20,250 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8f332336ac984865a8defce8a8d0b1be 2023-07-23 05:11:20,251 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/m/8f332336ac984865a8defce8a8d0b1be, entries=12, sequenceid=29, filesize=5.4 K 2023-07-23 05:11:20,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 6bca6539cf64287f2b8b50b4d00a25f5 in 129ms, sequenceid=29, compaction requested=false 2023-07-23 05:11:20,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/rsgroup/6bca6539cf64287f2b8b50b4d00a25f5/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-23 05:11:20,287 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:20,289 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:20,289 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 6bca6539cf64287f2b8b50b4d00a25f5: 2023-07-23 05:11:20,290 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690089077394.6bca6539cf64287f2b8b50b4d00a25f5. 2023-07-23 05:11:20,301 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:20,301 INFO [RS:3;jenkins-hbase4:39353] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39353,1690089077871; zookeeper connection closed. 2023-07-23 05:11:20,301 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:39353-0x101909737dc000b, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:20,301 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6839d6c8] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6839d6c8 2023-07-23 05:11:20,322 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43649,1690089076297; all regions closed. 
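For context: the NodeDeleted / NodeChildrenChanged events on /hbase/rs recorded above come from the region servers' ephemeral znodes disappearing as each server's ZooKeeper session ends, which the master's RegionServerTracker then processes as an expiration. Below is a minimal sketch, using the plain Apache ZooKeeper client, of watching the same znode for those events; this is illustrative only and not HBase code, the class name is made up, and the quorum address is simply the ephemeral one that appears in this test run's log.

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Illustrative sketch only: prints NodeChildrenChanged / NodeDeleted notifications
// for /hbase/rs, the kind of events ZKWatcher logs above as region servers exit.
public class RsZNodeWatchSketch {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("127.0.0.1:51330", 30_000, event -> { });
    Watcher watcher = new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        System.out.println(event.getType() + " " + event.getPath());
        try {
          // ZooKeeper watches are one-shot, so re-arm after every notification.
          zk.getChildren("/hbase/rs", this);
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
    };
    zk.getChildren("/hbase/rs", watcher);
    Thread.sleep(60_000); // observe for a minute, then disconnect
    zk.close();
  }
}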
2023-07-23 05:11:20,327 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 05:11:20,327 DEBUG [RS:1;jenkins-hbase4:43649] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs 2023-07-23 05:11:20,327 INFO [RS:1;jenkins-hbase4:43649] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43649%2C1690089076297:(num 1690089077079) 2023-07-23 05:11:20,327 DEBUG [RS:1;jenkins-hbase4:43649] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,327 INFO [RS:1;jenkins-hbase4:43649] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,328 INFO [RS:1;jenkins-hbase4:43649] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:20,328 INFO [RS:1;jenkins-hbase4:43649] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-23 05:11:20,328 INFO [RS:1;jenkins-hbase4:43649] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-23 05:11:20,328 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:20,328 INFO [RS:1;jenkins-hbase4:43649] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-23 05:11:20,329 INFO [RS:1;jenkins-hbase4:43649] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43649 2023-07-23 05:11:20,331 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:20,331 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43649,1690089076297 2023-07-23 05:11:20,331 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,332 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43649,1690089076297] 2023-07-23 05:11:20,332 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43649,1690089076297; numProcessing=3 2023-07-23 05:11:20,333 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43649,1690089076297 already deleted, retry=false 2023-07-23 05:11:20,333 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43649,1690089076297 expired; onlineServers=1 2023-07-23 05:11:20,401 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:20,401 INFO [RS:0;jenkins-hbase4:35827] regionserver.HRegionServer(1227): Exiting; 
stopping=jenkins-hbase4.apache.org,35827,1690089076137; zookeeper connection closed. 2023-07-23 05:11:20,401 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:35827-0x101909737dc0001, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:20,401 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6c021791] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6c021791 2023-07-23 05:11:20,527 DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 05:11:20,691 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/.tmp/table/d66a746913e64e68a4cd78e640476721 2023-07-23 05:11:20,698 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d66a746913e64e68a4cd78e640476721 2023-07-23 05:11:20,699 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/.tmp/info/1e31d3e7e73d4634a6626b6163d4c645 as hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/info/1e31d3e7e73d4634a6626b6163d4c645 2023-07-23 05:11:20,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 1e31d3e7e73d4634a6626b6163d4c645 2023-07-23 05:11:20,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/info/1e31d3e7e73d4634a6626b6163d4c645, entries=22, sequenceid=26, filesize=7.3 K 2023-07-23 05:11:20,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/.tmp/rep_barrier/ce40ad976cc94518b0e1d1a6b2a63604 as hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/rep_barrier/ce40ad976cc94518b0e1d1a6b2a63604 2023-07-23 05:11:20,716 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ce40ad976cc94518b0e1d1a6b2a63604 2023-07-23 05:11:20,717 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/rep_barrier/ce40ad976cc94518b0e1d1a6b2a63604, entries=1, sequenceid=26, filesize=4.9 K 2023-07-23 05:11:20,718 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/.tmp/table/d66a746913e64e68a4cd78e640476721 as hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/table/d66a746913e64e68a4cd78e640476721 2023-07-23 05:11:20,727 
DEBUG [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-23 05:11:20,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for d66a746913e64e68a4cd78e640476721 2023-07-23 05:11:20,736 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/table/d66a746913e64e68a4cd78e640476721, entries=6, sequenceid=26, filesize=5.1 K 2023-07-23 05:11:20,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 611ms, sequenceid=26, compaction requested=false 2023-07-23 05:11:20,759 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-23 05:11:20,760 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-23 05:11:20,761 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:20,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-23 05:11:20,761 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-23 05:11:20,927 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44971,1690089076453; all regions closed. 2023-07-23 05:11:20,935 DEBUG [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs 2023-07-23 05:11:20,935 INFO [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44971%2C1690089076453.meta:.meta(num 1690089077285) 2023-07-23 05:11:20,939 DEBUG [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/oldWALs 2023-07-23 05:11:20,939 INFO [RS:2;jenkins-hbase4:44971] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44971%2C1690089076453:(num 1690089077103) 2023-07-23 05:11:20,939 DEBUG [RS:2;jenkins-hbase4:44971] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,939 INFO [RS:2;jenkins-hbase4:44971] regionserver.LeaseManager(133): Closed leases 2023-07-23 05:11:20,940 INFO [RS:2;jenkins-hbase4:44971] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-23 05:11:20,940 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-23 05:11:20,941 INFO [RS:2;jenkins-hbase4:44971] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44971 2023-07-23 05:11:20,943 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-23 05:11:20,943 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44971,1690089076453 2023-07-23 05:11:20,944 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44971,1690089076453] 2023-07-23 05:11:20,944 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44971,1690089076453; numProcessing=4 2023-07-23 05:11:20,945 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44971,1690089076453 already deleted, retry=false 2023-07-23 05:11:20,945 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44971,1690089076453 expired; onlineServers=0 2023-07-23 05:11:20,945 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34969,1690089075958' ***** 2023-07-23 05:11:20,945 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-23 05:11:20,947 DEBUG [M:0;jenkins-hbase4:34969] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@47956655, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-23 05:11:20,947 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-23 05:11:20,949 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-23 05:11:20,949 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-23 05:11:20,950 INFO [M:0;jenkins-hbase4:34969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1ec2cbd0{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-23 05:11:20,950 INFO [M:0;jenkins-hbase4:34969] server.AbstractConnector(383): Stopped ServerConnector@60773ed9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:20,950 INFO [M:0;jenkins-hbase4:34969] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-23 05:11:20,951 INFO [M:0;jenkins-hbase4:34969] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@41ffc7bc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-23 05:11:20,951 INFO [M:0;jenkins-hbase4:34969] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@338f5eef{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/hadoop.log.dir/,STOPPED} 2023-07-23 05:11:20,952 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-23 05:11:20,952 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34969,1690089075958 2023-07-23 05:11:20,953 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34969,1690089075958; all regions closed. 2023-07-23 05:11:20,953 DEBUG [M:0;jenkins-hbase4:34969] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-23 05:11:20,953 INFO [M:0;jenkins-hbase4:34969] master.HMaster(1491): Stopping master jetty server 2023-07-23 05:11:20,953 INFO [M:0;jenkins-hbase4:34969] server.AbstractConnector(383): Stopped ServerConnector@4fd3b347{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-23 05:11:20,954 DEBUG [M:0;jenkins-hbase4:34969] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-23 05:11:20,954 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-23 05:11:20,954 DEBUG [M:0;jenkins-hbase4:34969] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-23 05:11:20,954 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089076819] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690089076819,5,FailOnTimeoutGroup] 2023-07-23 05:11:20,954 INFO [M:0;jenkins-hbase4:34969] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-23 05:11:20,954 INFO [M:0;jenkins-hbase4:34969] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-23 05:11:20,954 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089076819] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690089076819,5,FailOnTimeoutGroup] 2023-07-23 05:11:20,954 INFO [M:0;jenkins-hbase4:34969] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-23 05:11:20,955 DEBUG [M:0;jenkins-hbase4:34969] master.HMaster(1512): Stopping service threads 2023-07-23 05:11:20,955 INFO [M:0;jenkins-hbase4:34969] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-23 05:11:20,955 ERROR [M:0;jenkins-hbase4:34969] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-23 05:11:20,955 INFO [M:0;jenkins-hbase4:34969] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-23 05:11:20,955 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-23 05:11:20,955 DEBUG [M:0;jenkins-hbase4:34969] zookeeper.ZKUtil(398): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-23 05:11:20,955 WARN [M:0;jenkins-hbase4:34969] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-23 05:11:20,955 INFO [M:0;jenkins-hbase4:34969] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-23 05:11:20,956 INFO [M:0;jenkins-hbase4:34969] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-23 05:11:20,956 DEBUG [M:0;jenkins-hbase4:34969] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-23 05:11:20,956 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:20,956 DEBUG [M:0;jenkins-hbase4:34969] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:20,956 DEBUG [M:0;jenkins-hbase4:34969] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-23 05:11:20,956 DEBUG [M:0;jenkins-hbase4:34969] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-23 05:11:20,956 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.18 KB heapSize=90.62 KB 2023-07-23 05:11:20,969 INFO [M:0;jenkins-hbase4:34969] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.18 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/604782845eb849318e2b89eaa364a8e5 2023-07-23 05:11:20,975 DEBUG [M:0;jenkins-hbase4:34969] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/604782845eb849318e2b89eaa364a8e5 as hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/604782845eb849318e2b89eaa364a8e5 2023-07-23 05:11:20,980 INFO [M:0;jenkins-hbase4:34969] regionserver.HStore(1080): Added hdfs://localhost:37269/user/jenkins/test-data/cd7773b2-fd48-bfe2-9797-a48460248e72/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/604782845eb849318e2b89eaa364a8e5, entries=22, sequenceid=175, filesize=11.1 K 2023-07-23 05:11:20,981 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegion(2948): Finished flush of dataSize ~76.18 KB/78012, heapSize ~90.60 KB/92776, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 25ms, sequenceid=175, compaction requested=false 2023-07-23 05:11:20,984 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-23 05:11:20,984 DEBUG [M:0;jenkins-hbase4:34969] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-23 05:11:20,988 INFO [M:0;jenkins-hbase4:34969] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-23 05:11:20,988 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-23 05:11:20,988 INFO [M:0;jenkins-hbase4:34969] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34969 2023-07-23 05:11:20,991 DEBUG [M:0;jenkins-hbase4:34969] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,34969,1690089075958 already deleted, retry=false 2023-07-23 05:11:21,002 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:21,002 INFO [RS:1;jenkins-hbase4:43649] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43649,1690089076297; zookeeper connection closed. 2023-07-23 05:11:21,002 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:43649-0x101909737dc0002, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:21,003 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@64f320dc] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@64f320dc 2023-07-23 05:11:21,102 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:21,102 INFO [M:0;jenkins-hbase4:34969] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34969,1690089075958; zookeeper connection closed. 2023-07-23 05:11:21,102 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): master:34969-0x101909737dc0000, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:21,203 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:21,203 INFO [RS:2;jenkins-hbase4:44971] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44971,1690089076453; zookeeper connection closed. 
2023-07-23 05:11:21,203 DEBUG [Listener at localhost/46717-EventThread] zookeeper.ZKWatcher(600): regionserver:44971-0x101909737dc0003, quorum=127.0.0.1:51330, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-23 05:11:21,203 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@43efe842] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@43efe842 2023-07-23 05:11:21,203 INFO [Listener at localhost/46717] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-23 05:11:21,203 WARN [Listener at localhost/46717] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:21,207 INFO [Listener at localhost/46717] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:21,310 WARN [BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:21,310 WARN [BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1197641724-172.31.14.131-1690089075257 (Datanode Uuid 876a6bf7-ca6c-4d8d-90b5-b7bcc7b9b9ff) service to localhost/127.0.0.1:37269 2023-07-23 05:11:21,311 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data5/current/BP-1197641724-172.31.14.131-1690089075257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:21,311 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data6/current/BP-1197641724-172.31.14.131-1690089075257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:21,312 WARN [Listener at localhost/46717] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:21,315 INFO [Listener at localhost/46717] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:21,417 WARN [BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:21,417 WARN [BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1197641724-172.31.14.131-1690089075257 (Datanode Uuid 518cc289-9869-4810-8163-4679c2187ddc) service to localhost/127.0.0.1:37269 2023-07-23 05:11:21,418 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data3/current/BP-1197641724-172.31.14.131-1690089075257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:21,418 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data4/current/BP-1197641724-172.31.14.131-1690089075257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:21,419 WARN [Listener at localhost/46717] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-23 05:11:21,421 INFO [Listener at localhost/46717] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:21,524 WARN [BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-23 05:11:21,524 WARN [BP-1197641724-172.31.14.131-1690089075257 heartbeating to localhost/127.0.0.1:37269] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1197641724-172.31.14.131-1690089075257 (Datanode Uuid 23ebb0ec-1e0a-4f8d-b243-54b97e902f2b) service to localhost/127.0.0.1:37269 2023-07-23 05:11:21,525 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data1/current/BP-1197641724-172.31.14.131-1690089075257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:21,525 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/4475f562-fdf4-8504-cc13-283c0726a5fa/cluster_01bbd2d4-42bc-4466-def5-cf252d6ee33b/dfs/data/data2/current/BP-1197641724-172.31.14.131-1690089075257] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-23 05:11:21,537 INFO [Listener at localhost/46717] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-23 05:11:21,650 INFO [Listener at localhost/46717] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-23 05:11:21,675 INFO [Listener at localhost/46717] hbase.HBaseTestingUtility(1293): Minicluster is down
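For context: the shutdown sequence above (regions flushed and closed, WALs archived to oldWALs, region servers and the master stopped, DataNodes and the MiniZK cluster torn down, ending with "Minicluster is down") is what HBaseTestingUtility#shutdownMiniCluster performs when a test such as TestRSGroupsAdmin1 finishes. A minimal sketch of that JUnit lifecycle follows; the class name and test body are illustrative assumptions, not taken from this log.

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

// Illustrative sketch only: a minimal JUnit 4 skeleton around HBaseTestingUtility.
public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpCluster() throws Exception {
    // Brings up the mini cluster (ZooKeeper, HDFS, master, region servers).
    TEST_UTIL.startMiniCluster();
  }

  @AfterClass
  public static void tearDownCluster() throws Exception {
    // Produces the kind of shutdown recorded above: regions are flushed and closed,
    // WAL files are moved to oldWALs, the region servers and master stop, and the
    // DataNodes and MiniZK cluster are shut down ("Minicluster is down").
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void example() throws Exception {
    // Test logic against TEST_UTIL.getConnection() / TEST_UTIL.getAdmin() would go here.
  }
}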