2023-07-22 18:11:06,843 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e
2023-07-22 18:11:06,860 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins
2023-07-22 18:11:06,875 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false}
2023-07-22 18:11:06,876 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95, deleteOnExit=true
2023-07-22 18:11:06,876 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS
2023-07-22 18:11:06,877 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/test.cache.data in system properties and HBase conf
2023-07-22 18:11:06,877 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.tmp.dir in system properties and HBase conf
2023-07-22 18:11:06,878 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir in system properties and HBase conf
2023-07-22 18:11:06,878 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/mapreduce.cluster.local.dir in system properties and HBase conf
2023-07-22 18:11:06,879 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/mapreduce.cluster.temp.dir in system properties and HBase conf
2023-07-22 18:11:06,879 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF
2023-07-22 18:11:06,996 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-07-22 18:11:07,448 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering
2023-07-22 18:11:07,453 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.node-labels.fs-store.root-dir in system properties and HBase conf
2023-07-22 18:11:07,454 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf
2023-07-22 18:11:07,454 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.nodemanager.log-dirs in system properties and HBase conf
2023-07-22 18:11:07,455 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-22 18:11:07,455 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf
2023-07-22 18:11:07,456 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf
2023-07-22 18:11:07,456 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf
2023-07-22 18:11:07,456 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-22 18:11:07,457 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf
2023-07-22 18:11:07,457 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/nfs.dump.dir in system properties and HBase conf
2023-07-22 18:11:07,458 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir in system properties and HBase conf
2023-07-22 18:11:07,458 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/dfs.journalnode.edits.dir in system properties and HBase conf
2023-07-22 18:11:07,459 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf
2023-07-22 18:11:07,459 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/fs.s3a.committer.staging.tmp.path in system properties and HBase conf
Formatting using clusterid: testClusterID
2023-07-22 18:11:08,057 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-22 18:11:08,061 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-22 18:11:08,345 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
2023-07-22 18:11:08,522 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2023-07-22 18:11:08,538 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-22 18:11:08,619 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26
2023-07-22 18:11:08,656 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/Jetty_localhost_45601_hdfs____.awtfwg/webapp
2023-07-22 18:11:08,798 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:45601
2023-07-22 18:11:08,808 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS
2023-07-22 18:11:08,808 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS
2023-07-22 18:11:09,361 WARN [Listener at localhost/43335] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-22 18:11:09,448 WARN [Listener at localhost/43335] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-22 18:11:09,472 WARN [Listener at localhost/43335] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-22 18:11:09,480 INFO [Listener at localhost/43335] log.Slf4jLog(67): jetty-6.1.26
2023-07-22 18:11:09,486 INFO [Listener at localhost/43335] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/Jetty_localhost_41319_datanode____rgkyjp/webapp
2023-07-22 18:11:09,644 INFO [Listener at localhost/43335] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41319
2023-07-22 18:11:10,090 WARN [Listener at localhost/35247] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-22 18:11:10,134 WARN [Listener at localhost/35247] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-22 18:11:10,140 WARN [Listener at localhost/35247] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-22 18:11:10,141 INFO [Listener at localhost/35247] log.Slf4jLog(67): jetty-6.1.26
2023-07-22 18:11:10,152 INFO [Listener at localhost/35247] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/Jetty_localhost_40515_datanode____y5a4xw/webapp
2023-07-22 18:11:10,267 INFO [Listener at localhost/35247] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40515
2023-07-22 18:11:10,283 WARN [Listener at localhost/38231] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-22 18:11:10,305 WARN [Listener at localhost/38231] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS
2023-07-22 18:11:10,309 WARN [Listener at localhost/38231] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j
2023-07-22 18:11:10,311 INFO [Listener at localhost/38231] log.Slf4jLog(67): jetty-6.1.26
2023-07-22 18:11:10,319 INFO [Listener at localhost/38231] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/Jetty_localhost_33815_datanode____.9sreal/webapp
2023-07-22 18:11:10,449 INFO [Listener at localhost/38231] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33815
2023-07-22 18:11:10,460 WARN [Listener at localhost/37829] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j
2023-07-22 18:11:10,646 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa35b7511157c2a86: Processing first storage report for DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d from datanode 6bb7a2b6-fd49-4764-9407-0ebf19b51997
2023-07-22 18:11:10,647 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa35b7511157c2a86: from storage DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d node DatanodeRegistration(127.0.0.1:46673, datanodeUuid=6bb7a2b6-fd49-4764-9407-0ebf19b51997, infoPort=45555, infoSecurePort=0, ipcPort=37829, storageInfo=lv=-57;cid=testClusterID;nsid=1294877175;c=1690049468130), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0
2023-07-22 18:11:10,647 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4fe998efad65efe: Processing first storage report for DS-73ccb354-3fc7-4c94-8af0-c23432cafde2 from datanode 299bddcc-4f07-44f7-9457-659925c68d26
2023-07-22 18:11:10,647 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4fe998efad65efe: from storage DS-73ccb354-3fc7-4c94-8af0-c23432cafde2 node DatanodeRegistration(127.0.0.1:39251, datanodeUuid=299bddcc-4f07-44f7-9457-659925c68d26, infoPort=42619, infoSecurePort=0, ipcPort=38231, storageInfo=lv=-57;cid=testClusterID;nsid=1294877175;c=1690049468130), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3021c6d647bd7f4: Processing first storage report for DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6 from datanode 84d6b383-b03f-4f0f-b53d-ec1b4939f810
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3021c6d647bd7f4: from storage DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6 node DatanodeRegistration(127.0.0.1:32801, datanodeUuid=84d6b383-b03f-4f0f-b53d-ec1b4939f810, infoPort=41221, infoSecurePort=0, ipcPort=35247, storageInfo=lv=-57;cid=testClusterID;nsid=1294877175;c=1690049468130), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa35b7511157c2a86: Processing first storage report for DS-01217aef-bbd8-45f6-8574-8fbf76d29686 from datanode 6bb7a2b6-fd49-4764-9407-0ebf19b51997
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa35b7511157c2a86: from storage DS-01217aef-bbd8-45f6-8574-8fbf76d29686 node DatanodeRegistration(127.0.0.1:46673, datanodeUuid=6bb7a2b6-fd49-4764-9407-0ebf19b51997, infoPort=45555, infoSecurePort=0, ipcPort=37829, storageInfo=lv=-57;cid=testClusterID;nsid=1294877175;c=1690049468130), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb4fe998efad65efe: Processing first storage report for DS-97895b2e-8da8-4cec-9310-1076c80af29a from datanode 299bddcc-4f07-44f7-9457-659925c68d26
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb4fe998efad65efe: from storage DS-97895b2e-8da8-4cec-9310-1076c80af29a node DatanodeRegistration(127.0.0.1:39251, datanodeUuid=299bddcc-4f07-44f7-9457-659925c68d26, infoPort=42619, infoSecurePort=0, ipcPort=38231, storageInfo=lv=-57;cid=testClusterID;nsid=1294877175;c=1690049468130), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb3021c6d647bd7f4: Processing first storage report for DS-20336a2c-0cfd-4e19-a132-9f79554e9973 from datanode 84d6b383-b03f-4f0f-b53d-ec1b4939f810
2023-07-22 18:11:10,648 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb3021c6d647bd7f4: from storage DS-20336a2c-0cfd-4e19-a132-9f79554e9973 node DatanodeRegistration(127.0.0.1:32801, datanodeUuid=84d6b383-b03f-4f0f-b53d-ec1b4939f810, infoPort=41221, infoSecurePort=0, ipcPort=35247, storageInfo=lv=-57;cid=testClusterID;nsid=1294877175;c=1690049468130), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2023-07-22 18:11:10,840 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e
2023-07-22 18:11:10,908 INFO [Listener at localhost/37829] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/zookeeper_0, clientPort=62144, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0
2023-07-22 18:11:10,921 INFO [Listener at localhost/37829] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=62144
2023-07-22 18:11:10,929 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:10,930 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:11,592 INFO [Listener at localhost/37829] util.FSUtils(471): Created version file at hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf with version=8
2023-07-22 18:11:11,592 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/hbase-staging
2023-07-22 18:11:11,600 DEBUG [Listener at localhost/37829] hbase.LocalHBaseCluster(134): Setting Master Port to random.
2023-07-22 18:11:11,601 DEBUG [Listener at localhost/37829] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random.
2023-07-22 18:11:11,601 DEBUG [Listener at localhost/37829] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random.
2023-07-22 18:11:11,601 DEBUG [Listener at localhost/37829] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random.
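The startup sequence above (mini DFS with three datanodes, a single-node mini ZooKeeper, then a LocalHBaseCluster on randomized ports) is what HBaseTestingUtility produces for the logged StartMiniClusterOption. Below is a minimal sketch of how a test class typically requests exactly this topology, assuming the standard branch-2.4 test APIs; the class name is illustrative and this is not the actual TestRSGroupsAdmin1 source.

    import org.apache.hadoop.hbase.HBaseClassTestRule;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;
    import org.apache.hadoop.hbase.testclassification.LargeTests;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.ClassRule;
    import org.junit.experimental.categories.Category;

    @Category(LargeTests.class)
    public class MiniClusterStartupSketch {

      // HBaseClassTestRule derives the "timeout: 13 mins" seen above from the test's size category.
      @ClassRule
      public static final HBaseClassTestRule CLASS_RULE =
          HBaseClassTestRule.forClass(MiniClusterStartupSketch.class);

      private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @BeforeClass
      public static void setUpBeforeClass() throws Exception {
        // Matches the logged StartMiniClusterOption{numMasters=1, numRegionServers=3,
        // numDataNodes=3, numZkServers=1, createRootDir=false, createWALDir=false}.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        TEST_UTIL.startMiniCluster(option);
      }

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Tears down HBase, ZooKeeper and DFS; the per-run test-data directory is deleted on exit,
        // matching deleteOnExit=true in the log.
        TEST_UTIL.shutdownMiniCluster();
      }
    }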
2023-07-22 18:11:11,964 INFO [Listener at localhost/37829] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-07-22 18:11:12,556 INFO [Listener at localhost/37829] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45
2023-07-22 18:11:12,608 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:12,608 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:12,609 INFO [Listener at localhost/37829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-22 18:11:12,609 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:12,609 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-22 18:11:12,772 INFO [Listener at localhost/37829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService
2023-07-22 18:11:12,862 DEBUG [Listener at localhost/37829] util.ClassSize(228): Using Unsafe to estimate memory layout
2023-07-22 18:11:12,970 INFO [Listener at localhost/37829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:40289
2023-07-22 18:11:12,986 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:12,988 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:13,014 INFO [Listener at localhost/37829] zookeeper.RecoverableZooKeeper(93): Process identifier=master:40289 connecting to ZooKeeper ensemble=127.0.0.1:62144
2023-07-22 18:11:13,063 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:402890x0, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-22 18:11:13,067 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:40289-0x1018e3ae4b00000 connected
2023-07-22 18:11:13,103 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-22 18:11:13,104 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-22 18:11:13,111 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-22 18:11:13,122 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=40289
2023-07-22 18:11:13,123 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=40289
2023-07-22 18:11:13,126 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=40289
2023-07-22 18:11:13,127 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=40289
2023-07-22 18:11:13,128 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=40289
2023-07-22 18:11:13,169 INFO [Listener at localhost/37829] log.Log(170): Logging initialized @7121ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog
2023-07-22 18:11:13,318 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-22 18:11:13,319 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-22 18:11:13,320 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-22 18:11:13,322 INFO [Listener at localhost/37829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master
2023-07-22 18:11:13,322 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-22 18:11:13,322 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-22 18:11:13,326 INFO [Listener at localhost/37829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
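The master whose NettyRpcServer and ZooKeeper session are established above is only useful to this rsgroup suite once the rsgroup endpoint and balancer are plugged in; that wiring happens on the shared Configuration before startMiniCluster() is called. A hedged sketch of the usual setup follows; it is an assumption about the suite's base-class configuration, not something shown in this log.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint;
    import org.apache.hadoop.hbase.rsgroup.RSGroupBasedLoadBalancer;

    public final class RsGroupTestWiringSketch {
      private RsGroupTestWiringSketch() {
      }

      // Call on TEST_UTIL.getConfiguration() before startMiniCluster(option).
      public static void enableRsGroups(Configuration conf) {
        // Load the rsgroup admin coprocessor on the master so an RSGroupAdmin endpoint
        // is registered next to the master services listed in the log above.
        conf.set(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            RSGroupAdminEndpoint.class.getName());
        // Swap in the rsgroup-aware balancer so region moves respect group boundaries.
        conf.set(HConstants.HBASE_MASTER_LOADBALANCER_CLASS,
            RSGroupBasedLoadBalancer.class.getName());
      }
    }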
2023-07-22 18:11:13,387 INFO [Listener at localhost/37829] http.HttpServer(1146): Jetty bound to port 43673
2023-07-22 18:11:13,388 INFO [Listener at localhost/37829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-22 18:11:13,420 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-22 18:11:13,423 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@563e1db6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,AVAILABLE}
2023-07-22 18:11:13,424 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-22 18:11:13,425 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b463d55{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-22 18:11:13,655 INFO [Listener at localhost/37829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-22 18:11:13,673 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-22 18:11:13,673 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-22 18:11:13,676 INFO [Listener at localhost/37829] session.HouseKeeper(132): node0 Scavenging every 660000ms
2023-07-22 18:11:13,685 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-22 18:11:13,722 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@66df7ad1{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/jetty-0_0_0_0-43673-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1341931090684305617/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master}
2023-07-22 18:11:13,738 INFO [Listener at localhost/37829] server.AbstractConnector(333): Started ServerConnector@631e341c{HTTP/1.1, (http/1.1)}{0.0.0.0:43673}
2023-07-22 18:11:13,739 INFO [Listener at localhost/37829] server.Server(415): Started @7691ms
2023-07-22 18:11:13,742 INFO [Listener at localhost/37829] master.HMaster(444): hbase.rootdir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf, hbase.cluster.distributed=false
2023-07-22 18:11:13,845 INFO [Listener at localhost/37829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-22 18:11:13,846 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:13,846 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:13,846 INFO [Listener at localhost/37829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-22 18:11:13,846 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:13,846 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-22 18:11:13,852 INFO [Listener at localhost/37829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-22 18:11:13,856 INFO [Listener at localhost/37829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33411
2023-07-22 18:11:13,859 INFO [Listener at localhost/37829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-22 18:11:13,867 DEBUG [Listener at localhost/37829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-22 18:11:13,868 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:13,871 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:13,873 INFO [Listener at localhost/37829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33411 connecting to ZooKeeper ensemble=127.0.0.1:62144
2023-07-22 18:11:13,883 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:334110x0, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-22 18:11:13,884 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33411-0x1018e3ae4b00001 connected
2023-07-22 18:11:13,885 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-22 18:11:13,886 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-22 18:11:13,887 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-22 18:11:13,895 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33411
2023-07-22 18:11:13,895 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33411
2023-07-22 18:11:13,895 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33411
2023-07-22 18:11:13,899 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33411
2023-07-22 18:11:13,899 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33411
2023-07-22 18:11:13,902 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-22 18:11:13,902 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-22 18:11:13,902 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-22 18:11:13,904 INFO [Listener at localhost/37829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-22 18:11:13,904 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-22 18:11:13,904 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-22 18:11:13,904 INFO [Listener at localhost/37829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
2023-07-22 18:11:13,906 INFO [Listener at localhost/37829] http.HttpServer(1146): Jetty bound to port 34337
2023-07-22 18:11:13,907 INFO [Listener at localhost/37829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09
2023-07-22 18:11:13,914 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-22 18:11:13,914 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2ffb745f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,AVAILABLE}
2023-07-22 18:11:13,915 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-22 18:11:13,915 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53b9762b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE}
2023-07-22 18:11:14,038 INFO [Listener at localhost/37829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet
2023-07-22 18:11:14,040 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0
2023-07-22 18:11:14,041 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults
2023-07-22 18:11:14,041 INFO [Listener at localhost/37829] session.HouseKeeper(132): node0 Scavenging every 600000ms
2023-07-22 18:11:14,043 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter
2023-07-22 18:11:14,046 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@6a9e2012{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/jetty-0_0_0_0-34337-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2277058164801695518/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver}
2023-07-22 18:11:14,048 INFO [Listener at localhost/37829] server.AbstractConnector(333): Started ServerConnector@69156046{HTTP/1.1, (http/1.1)}{0.0.0.0:34337}
2023-07-22 18:11:14,048 INFO [Listener at localhost/37829] server.Server(415): Started @8000ms
2023-07-22 18:11:14,061 INFO [Listener at localhost/37829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45
2023-07-22 18:11:14,062 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:14,062 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:14,063 INFO [Listener at localhost/37829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0
2023-07-22 18:11:14,063 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3
2023-07-22 18:11:14,063 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1
2023-07-22 18:11:14,063 INFO [Listener at localhost/37829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService
2023-07-22 18:11:14,065 INFO [Listener at localhost/37829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38977
2023-07-22 18:11:14,065 INFO [Listener at localhost/37829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB
2023-07-22 18:11:14,066 DEBUG [Listener at localhost/37829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5
2023-07-22 18:11:14,067 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:14,069 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2023-07-22 18:11:14,070 INFO [Listener at localhost/37829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38977 connecting to ZooKeeper ensemble=127.0.0.1:62144
2023-07-22 18:11:14,078 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:389770x0, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
2023-07-22 18:11:14,079 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:389770x0, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master
2023-07-22 18:11:14,080 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:389770x0, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running
2023-07-22 18:11:14,080 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:389770x0, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl
2023-07-22 18:11:14,082 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38977
2023-07-22 18:11:14,082 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38977
2023-07-22 18:11:14,083 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38977-0x1018e3ae4b00002 connected
2023-07-22 18:11:14,086 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38977
2023-07-22 18:11:14,087 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38977
2023-07-22 18:11:14,087 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38977
2023-07-22 18:11:14,090 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2023-07-22 18:11:14,090 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2023-07-22 18:11:14,090 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter)
2023-07-22 18:11:14,091 INFO [Listener at localhost/37829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2023-07-22 18:11:14,091 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2023-07-22 18:11:14,091 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2023-07-22 18:11:14,091 INFO [Listener at localhost/37829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint.
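Every process in the log joins the same ensemble at 127.0.0.1:62144, a client port MiniZooKeeperCluster picked at startup, just as the HDFS namenode landed on 43335. Test code never hard-codes these values; a small sketch of how the randomized endpoints can be read back from the utility, assuming the standard branch-2.4 APIs (the helper class and printout are illustrative only):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.HConstants;

    public final class MiniClusterEndpointsSketch {
      private MiniClusterEndpointsSketch() {
      }

      public static void printEndpoints(HBaseTestingUtility util) {
        Configuration conf = util.getConfiguration();
        // The mini ZooKeeper client port (62144 in this run) is chosen when the cluster starts.
        int zkPort = util.getZkCluster().getClientPort();
        String quorum = conf.get(HConstants.ZOOKEEPER_QUORUM);
        // hbase.rootdir points at the mini DFS namenode (hdfs://localhost:43335/... above).
        String rootDir = conf.get(HConstants.HBASE_DIR);
        System.out.println("zk=" + quorum + ":" + zkPort + ", rootdir=" + rootDir);
      }
    }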
2023-07-22 18:11:14,092 INFO [Listener at localhost/37829] http.HttpServer(1146): Jetty bound to port 36413 2023-07-22 18:11:14,092 INFO [Listener at localhost/37829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:14,099 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:14,099 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@19459c3b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:14,099 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:14,100 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@33446a5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:14,269 INFO [Listener at localhost/37829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:14,271 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:14,271 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:14,271 INFO [Listener at localhost/37829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:14,273 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:14,274 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1b56eac1{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/jetty-0_0_0_0-36413-hbase-server-2_4_18-SNAPSHOT_jar-_-any-27668991965889084/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:14,275 INFO [Listener at localhost/37829] server.AbstractConnector(333): Started ServerConnector@5d0a3d54{HTTP/1.1, (http/1.1)}{0.0.0.0:36413} 2023-07-22 18:11:14,276 INFO [Listener at localhost/37829] server.Server(415): Started @8228ms 2023-07-22 18:11:14,292 INFO [Listener at localhost/37829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:14,292 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:14,292 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:14,292 INFO [Listener at localhost/37829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:14,292 INFO 
[Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:14,292 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:14,293 INFO [Listener at localhost/37829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:14,295 INFO [Listener at localhost/37829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38507 2023-07-22 18:11:14,295 INFO [Listener at localhost/37829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:14,299 DEBUG [Listener at localhost/37829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:14,300 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:14,302 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:14,303 INFO [Listener at localhost/37829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38507 connecting to ZooKeeper ensemble=127.0.0.1:62144 2023-07-22 18:11:14,307 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:385070x0, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:14,308 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38507-0x1018e3ae4b00003 connected 2023-07-22 18:11:14,308 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:14,309 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:14,310 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:14,310 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38507 2023-07-22 18:11:14,314 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38507 2023-07-22 18:11:14,315 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38507 2023-07-22 18:11:14,317 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38507 2023-07-22 18:11:14,317 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38507 2023-07-22 18:11:14,319 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:14,319 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:14,319 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:14,320 INFO [Listener at localhost/37829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:14,320 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:14,320 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:14,320 INFO [Listener at localhost/37829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 18:11:14,321 INFO [Listener at localhost/37829] http.HttpServer(1146): Jetty bound to port 34885 2023-07-22 18:11:14,322 INFO [Listener at localhost/37829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:14,331 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:14,331 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5f20ff62{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:14,332 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:14,332 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2855a58d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:14,448 INFO [Listener at localhost/37829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:14,449 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:14,449 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:14,450 INFO [Listener at localhost/37829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:14,451 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:14,452 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@7cc51cf3{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/jetty-0_0_0_0-34885-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9066046144653784447/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:14,454 INFO [Listener at localhost/37829] server.AbstractConnector(333): Started ServerConnector@3023e605{HTTP/1.1, (http/1.1)}{0.0.0.0:34885} 2023-07-22 18:11:14,454 INFO [Listener at localhost/37829] server.Server(415): Started @8407ms 2023-07-22 18:11:14,461 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:14,468 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5e228df9{HTTP/1.1, (http/1.1)}{0.0.0.0:46377} 2023-07-22 18:11:14,469 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8421ms 2023-07-22 18:11:14,469 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:14,482 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 18:11:14,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:14,503 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:14,503 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:14,503 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:14,504 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:14,504 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:14,506 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:14,507 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,40289,1690049471773 from backup master directory 2023-07-22 18:11:14,507 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:14,513 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:14,513 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 18:11:14,514 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:14,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:14,518 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-22 18:11:14,520 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-22 18:11:14,641 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/hbase.id with ID: 5cc371fe-a800-412b-aae5-b6a77a194597 2023-07-22 18:11:14,685 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:14,700 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:14,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x20eaf84c to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:14,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@708bf106, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:14,835 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:14,837 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-22 18:11:14,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-22 18:11:14,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-22 18:11:14,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-22 18:11:14,871 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140) at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135) at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202) at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182) at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339) at
org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104) at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855) at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193) at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528) at java.lang.Thread.run(Thread.java:750) 2023-07-22 18:11:14,873 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:14,922 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store-tmp 2023-07-22 18:11:14,971 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:14,971 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 18:11:14,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:14,971 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:14,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 1 ms 2023-07-22 18:11:14,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:14,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 18:11:14,972 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:14,973 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/WALs/jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:15,000 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C40289%2C1690049471773, suffix=, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/WALs/jenkins-hbase4.apache.org,40289,1690049471773, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/oldWALs, maxLogs=10 2023-07-22 18:11:15,079 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:15,079 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:15,079 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:15,089 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691) at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489) at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2023-07-22 18:11:15,175 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/WALs/jenkins-hbase4.apache.org,40289,1690049471773/jenkins-hbase4.apache.org%2C40289%2C1690049471773.1690049475012 2023-07-22 18:11:15,175 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK], DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK]] 2023-07-22 18:11:15,176 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:15,176 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:15,181 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:15,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:15,250 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:15,257 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-22 18:11:15,292 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-22 18:11:15,305 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-22 18:11:15,311 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:15,314 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:15,339 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:15,347 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:15,348 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11428608320, jitterRate=0.06437209248542786}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:15,348 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:15,356 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-22 18:11:15,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-22 18:11:15,380 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-22 18:11:15,383 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-22 18:11:15,384 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-22 18:11:15,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 37 msec 2023-07-22 18:11:15,422 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-22 18:11:15,455 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-22 18:11:15,463 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-22 18:11:15,470 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-22 18:11:15,476 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-22 18:11:15,480 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-22 18:11:15,484 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:15,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-22 18:11:15,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-22 18:11:15,500 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-22 18:11:15,507 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:15,507 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:15,507 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:15,507 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:15,507 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:15,510 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,40289,1690049471773, sessionid=0x1018e3ae4b00000, setting cluster-up flag (Was=false) 2023-07-22 18:11:15,550 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:15,557 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-22 18:11:15,559 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:15,564 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:15,570 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-22 18:11:15,571 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:15,574 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.hbase-snapshot/.tmp 2023-07-22 18:11:15,660 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(951): ClusterId : 5cc371fe-a800-412b-aae5-b6a77a194597 2023-07-22 18:11:15,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-22 18:11:15,688 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-22 18:11:15,701 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:15,701 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:15,701 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(951): ClusterId : 5cc371fe-a800-412b-aae5-b6a77a194597 2023-07-22 18:11:15,704 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:15,707 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(951): ClusterId : 5cc371fe-a800-412b-aae5-b6a77a194597 2023-07-22 18:11:15,707 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-22 18:11:15,708 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:15,708 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-22 18:11:15,714 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:15,714 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:15,714 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:15,714 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:15,715 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:15,715 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:15,719 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:15,719 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:15,719 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:15,723 DEBUG [RS:0;jenkins-hbase4:33411] zookeeper.ReadOnlyZKClient(139): Connect 0x4053758f to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:15,723 DEBUG [RS:2;jenkins-hbase4:38507] zookeeper.ReadOnlyZKClient(139): Connect 0x5d48fa3b to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:15,723 DEBUG [RS:1;jenkins-hbase4:38977] zookeeper.ReadOnlyZKClient(139): Connect 0x3b547dda to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:15,755 DEBUG [RS:0;jenkins-hbase4:33411] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1847ca6b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:15,757 DEBUG [RS:0;jenkins-hbase4:33411] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@44fad659, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:15,758 DEBUG [RS:2;jenkins-hbase4:38507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@184a6ff6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:15,758 DEBUG [RS:1;jenkins-hbase4:38977] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@359384c0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:15,759 DEBUG [RS:2;jenkins-hbase4:38507] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4ba38f7d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:15,759 DEBUG [RS:1;jenkins-hbase4:38977] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@74ae8efb, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:15,789 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38507 2023-07-22 18:11:15,790 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33411 2023-07-22 18:11:15,789 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:38977 2023-07-22 18:11:15,797 INFO [RS:1;jenkins-hbase4:38977] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:15,797 INFO [RS:2;jenkins-hbase4:38507] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:15,797 INFO [RS:0;jenkins-hbase4:33411] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:15,798 INFO [RS:0;jenkins-hbase4:33411] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:15,798 INFO [RS:1;jenkins-hbase4:38977] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:15,798 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:15,798 INFO [RS:2;jenkins-hbase4:38507] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:15,798 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:15,798 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-22 18:11:15,803 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:38977, startcode=1690049474061 2023-07-22 18:11:15,804 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:38507, startcode=1690049474291 2023-07-22 18:11:15,806 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:33411, startcode=1690049473844 2023-07-22 18:11:15,823 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:15,829 DEBUG [RS:0;jenkins-hbase4:33411] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:15,830 DEBUG [RS:1;jenkins-hbase4:38977] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:15,829 DEBUG [RS:2;jenkins-hbase4:38507] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:15,890 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 18:11:15,897 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 18:11:15,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 18:11:15,899 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:15,901 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:15,908 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51359, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:15,908 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39969, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:15,908 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38701, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:15,931 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690049505931 2023-07-22 18:11:15,943 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:15,947 DEBUG [PEWorker-1] 
procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:15,947 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-22 18:11:15,951 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-22 18:11:15,954 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:15,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-22 18:11:15,975 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:15,985 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832) at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579) at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:15,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-22 18:11:15,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-22 18:11:15,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-22 18:11:15,987 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-22 18:11:15,988 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:15,989 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-22 18:11:15,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-22 18:11:15,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-22 18:11:16,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-22 18:11:16,005 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-22 18:11:16,007 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(2830): Master is not running yet 2023-07-22 18:11:16,007 WARN [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-22 18:11:16,008 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049476007,5,FailOnTimeoutGroup] 2023-07-22 18:11:16,008 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(2830): Master is not running yet 2023-07-22 18:11:16,009 WARN [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-22 18:11:16,010 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(2830): Master is not running yet 2023-07-22 18:11:16,011 WARN [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-22 18:11:16,015 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049476008,5,FailOnTimeoutGroup] 2023-07-22 18:11:16,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,021 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-22 18:11:16,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:16,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,108 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:38507, startcode=1690049474291 2023-07-22 18:11:16,109 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:33411, startcode=1690049473844 2023-07-22 18:11:16,111 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:38977, startcode=1690049474061 2023-07-22 18:11:16,123 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,125 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:16,126 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-22 18:11:16,134 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,136 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:16,137 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-22 18:11:16,148 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,150 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 18:11:16,150 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-22 18:11:16,151 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf 2023-07-22 18:11:16,151 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43335 2023-07-22 18:11:16,151 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43673 2023-07-22 18:11:16,158 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:16,161 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf 2023-07-22 18:11:16,161 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43335 2023-07-22 18:11:16,161 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43673 2023-07-22 18:11:16,163 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf 2023-07-22 18:11:16,163 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43335 2023-07-22 18:11:16,163 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43673 2023-07-22 18:11:16,172 DEBUG [RS:2;jenkins-hbase4:38507] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,173 WARN [RS:2;jenkins-hbase4:38507] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:16,174 INFO [RS:2;jenkins-hbase4:38507] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:16,175 DEBUG [RS:1;jenkins-hbase4:38977] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,175 WARN [RS:1;jenkins-hbase4:38977] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:16,175 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,176 DEBUG [RS:0;jenkins-hbase4:33411] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,176 WARN [RS:0;jenkins-hbase4:33411] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:16,176 INFO [RS:0;jenkins-hbase4:33411] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:16,176 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,175 INFO [RS:1;jenkins-hbase4:38977] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:16,177 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,176 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33411,1690049473844] 2023-07-22 18:11:16,179 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38507,1690049474291] 2023-07-22 18:11:16,179 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:16,179 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38977,1690049474061] 2023-07-22 18:11:16,180 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:16,182 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', 
BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf 2023-07-22 18:11:16,205 DEBUG [RS:1;jenkins-hbase4:38977] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,205 DEBUG [RS:2;jenkins-hbase4:38507] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,205 DEBUG [RS:0;jenkins-hbase4:33411] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,205 DEBUG [RS:1;jenkins-hbase4:38977] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,212 DEBUG [RS:2;jenkins-hbase4:38507] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,212 DEBUG [RS:1;jenkins-hbase4:38977] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,212 DEBUG [RS:0;jenkins-hbase4:33411] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,213 DEBUG [RS:2;jenkins-hbase4:38507] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,214 DEBUG [RS:0;jenkins-hbase4:33411] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,229 DEBUG [RS:0;jenkins-hbase4:33411] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:16,229 DEBUG [RS:2;jenkins-hbase4:38507] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:16,229 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:16,241 INFO [RS:2;jenkins-hbase4:38507] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:16,257 INFO [RS:0;jenkins-hbase4:33411] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:16,257 INFO [RS:1;jenkins-hbase4:38977] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:16,289 INFO [RS:1;jenkins-hbase4:38977] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:16,293 INFO [RS:0;jenkins-hbase4:33411] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, 
globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:16,293 INFO [RS:2;jenkins-hbase4:38507] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:16,298 INFO [RS:1;jenkins-hbase4:38977] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:16,298 INFO [RS:2;jenkins-hbase4:38507] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:16,299 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,299 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,299 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:16,298 INFO [RS:0;jenkins-hbase4:33411] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:16,300 INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,301 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:16,302 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:16,307 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:16,305 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:16,316 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info 2023-07-22 18:11:16,317 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:16,320 INFO [StoreOpener-1588230740-1] 
regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:16,321 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:16,322 INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,322 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,323 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,323 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,323 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,323 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,323 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:16,324 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:16,324 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:16,324 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,324 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:1;jenkins-hbase4:38977] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:16,325 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:0;jenkins-hbase4:33411] executor.ExecutorService(93): Starting executor service 
name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,325 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,326 DEBUG [RS:2;jenkins-hbase4:38507] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:16,342 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,343 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,343 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,346 INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,347 INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,347 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:16,347 INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,347 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:16,349 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:16,349 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:16,350 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 
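The PressureAwareCompactionThroughputController entries above report a higher bound of 100.00 MB/second and a lower bound of 50.00 MB/second. The sketch below shows how those bounds are usually tuned; the hbase.hstore.compaction.throughput.* key names are quoted from memory rather than from this log, so treat them as assumptions, and the class name is invented for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Hypothetical sketch mirroring the 100 MB/s and 50 MB/s bounds logged above.
    public final class CompactionThroughputSketch {
      public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        // Values are bytes per second.
        conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
        conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
        return conf;
      }
    }
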
2023-07-22 18:11:16,351 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,351 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,363 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table 2023-07-22 18:11:16,364 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:16,366 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:16,368 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740 2023-07-22 18:11:16,370 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740 2023-07-22 18:11:16,376 INFO [RS:1;jenkins-hbase4:38977] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:16,376 INFO [RS:2;jenkins-hbase4:38507] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:16,381 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
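The FlushLargeStoresPolicy entry above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:meta descriptor, so the policy falls back to the region memstore flush size divided by the number of families. For a user table that bound can be pinned on the table descriptor itself; the sketch below assumes a made-up table name and an arbitrary 16 MB value.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // Hypothetical sketch: sets the per-column-family flush lower bound named in
    // the log message above on an example descriptor (column families omitted;
    // at least one would be needed before actually creating the table).
    public final class FlushLowerBoundSketch {
      public static TableDescriptor withFlushLowerBound() {
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_table"))
            // Value is bytes; 16 MB is an arbitrary illustration.
            .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                String.valueOf(16L * 1024 * 1024))
            .build();
      }
    }
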
2023-07-22 18:11:16,383 INFO [RS:0;jenkins-hbase4:33411] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:16,384 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:16,389 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:16,390 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11516448160, jitterRate=0.07255281507968903}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:16,390 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:16,390 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:16,390 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 18:11:16,390 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:16,390 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:16,390 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:16,392 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38977,1690049474061-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,392 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38507,1690049474291-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:16,392 INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33411,1690049473844-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:16,393 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:16,393 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:16,404 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:16,405 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-22 18:11:16,432 INFO [RS:0;jenkins-hbase4:33411] regionserver.Replication(203): jenkins-hbase4.apache.org,33411,1690049473844 started 2023-07-22 18:11:16,432 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33411,1690049473844, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33411, sessionid=0x1018e3ae4b00001 2023-07-22 18:11:16,433 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:16,433 DEBUG [RS:0;jenkins-hbase4:33411] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,433 DEBUG [RS:0;jenkins-hbase4:33411] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33411,1690049473844' 2023-07-22 18:11:16,433 DEBUG [RS:0;jenkins-hbase4:33411] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:16,434 DEBUG [RS:0;jenkins-hbase4:33411] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:16,435 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:16,435 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:16,435 DEBUG [RS:0;jenkins-hbase4:33411] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,435 DEBUG [RS:0;jenkins-hbase4:33411] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33411,1690049473844' 2023-07-22 18:11:16,435 DEBUG [RS:0;jenkins-hbase4:33411] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:16,436 DEBUG [RS:0;jenkins-hbase4:33411] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:16,436 DEBUG [RS:0;jenkins-hbase4:33411] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:16,436 INFO [RS:0;jenkins-hbase4:33411] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:16,436 INFO [RS:1;jenkins-hbase4:38977] regionserver.Replication(203): jenkins-hbase4.apache.org,38977,1690049474061 started 2023-07-22 18:11:16,436 INFO [RS:0;jenkins-hbase4:33411] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 18:11:16,436 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38977,1690049474061, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38977, sessionid=0x1018e3ae4b00002 2023-07-22 18:11:16,437 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-22 18:11:16,441 INFO [RS:2;jenkins-hbase4:38507] regionserver.Replication(203): jenkins-hbase4.apache.org,38507,1690049474291 started 2023-07-22 18:11:16,441 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:16,441 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38507,1690049474291, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38507, sessionid=0x1018e3ae4b00003 2023-07-22 18:11:16,441 DEBUG [RS:1;jenkins-hbase4:38977] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,442 DEBUG [RS:1;jenkins-hbase4:38977] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38977,1690049474061' 2023-07-22 18:11:16,443 DEBUG [RS:1;jenkins-hbase4:38977] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:16,443 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:16,443 DEBUG [RS:2;jenkins-hbase4:38507] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,443 DEBUG [RS:2;jenkins-hbase4:38507] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38507,1690049474291' 2023-07-22 18:11:16,443 DEBUG [RS:2;jenkins-hbase4:38507] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:16,443 DEBUG [RS:1;jenkins-hbase4:38977] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:16,444 DEBUG [RS:2;jenkins-hbase4:38507] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:16,444 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:16,444 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:16,444 DEBUG [RS:1;jenkins-hbase4:38977] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:16,445 DEBUG [RS:1;jenkins-hbase4:38977] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38977,1690049474061' 2023-07-22 18:11:16,445 DEBUG [RS:1;jenkins-hbase4:38977] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:16,445 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:16,445 DEBUG 
[RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:16,445 DEBUG [RS:2;jenkins-hbase4:38507] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:16,445 DEBUG [RS:2;jenkins-hbase4:38507] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38507,1690049474291' 2023-07-22 18:11:16,445 DEBUG [RS:2;jenkins-hbase4:38507] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:16,446 DEBUG [RS:2;jenkins-hbase4:38507] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:16,446 DEBUG [RS:1;jenkins-hbase4:38977] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:16,446 DEBUG [RS:2;jenkins-hbase4:38507] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:16,447 DEBUG [RS:1;jenkins-hbase4:38977] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:16,447 INFO [RS:2;jenkins-hbase4:38507] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:16,447 INFO [RS:1;jenkins-hbase4:38977] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:16,447 INFO [RS:1;jenkins-hbase4:38977] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-22 18:11:16,447 INFO [RS:2;jenkins-hbase4:38507] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 18:11:16,457 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-22 18:11:16,462 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-22 18:11:16,552 INFO [RS:1;jenkins-hbase4:38977] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38977%2C1690049474061, suffix=, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,38977,1690049474061, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs, maxLogs=32 2023-07-22 18:11:16,560 INFO [RS:2;jenkins-hbase4:38507] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38507%2C1690049474291, suffix=, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,38507,1690049474291, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs, maxLogs=32 2023-07-22 18:11:16,563 INFO [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33411%2C1690049473844, suffix=, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,33411,1690049473844, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs, maxLogs=32 2023-07-22 18:11:16,603 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:16,604 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:16,611 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:16,614 DEBUG [jenkins-hbase4:40289] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-22 18:11:16,641 DEBUG [jenkins-hbase4:40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:16,644 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:16,644 DEBUG [jenkins-hbase4:40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 
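The AbstractFSWAL configuration entries above show blocksize=256 MB, rollsize=128 MB and maxLogs=32. As a hedged sketch, that sizing is normally driven by hbase.regionserver.hlog.blocksize, hbase.regionserver.logroll.multiplier and hbase.regionserver.maxlogs (roll size being block size times the multiplier); the key names are standard HBase properties recalled from memory, not values read out of this log, and the class name is invented.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Hypothetical sketch reproducing the WAL sizing printed above:
    // 256 MB block size * 0.5 roll multiplier = 128 MB roll size, 32 logs max.
    public final class WalSizingSketch {
      public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);
        conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);
        conf.setInt("hbase.regionserver.maxlogs", 32);
        return conf;
      }
    }
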
2023-07-22 18:11:16,644 DEBUG [jenkins-hbase4:40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:16,644 DEBUG [jenkins-hbase4:40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:16,645 DEBUG [jenkins-hbase4:40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:16,646 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:16,648 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:16,649 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:16,651 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33411,1690049473844, state=OPENING 2023-07-22 18:11:16,656 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:16,656 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:16,663 INFO [RS:1;jenkins-hbase4:38977] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,38977,1690049474061/jenkins-hbase4.apache.org%2C38977%2C1690049474061.1690049476557 2023-07-22 18:11:16,664 DEBUG [RS:1;jenkins-hbase4:38977] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK], DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK]] 2023-07-22 18:11:16,664 DEBUG [PEWorker-2] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-22 18:11:16,666 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:16,669 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:16,673 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:16,675 INFO [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(806): 
New WAL /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,33411,1690049473844/jenkins-hbase4.apache.org%2C33411%2C1690049473844.1690049476565 2023-07-22 18:11:16,676 INFO [RS:2;jenkins-hbase4:38507] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,38507,1690049474291/jenkins-hbase4.apache.org%2C38507%2C1690049474291.1690049476565 2023-07-22 18:11:16,678 DEBUG [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK], DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK], DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK]] 2023-07-22 18:11:16,682 DEBUG [RS:2;jenkins-hbase4:38507] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK], DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK]] 2023-07-22 18:11:16,873 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:16,875 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:16,879 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37252, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:16,897 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 18:11:16,897 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:16,901 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33411%2C1690049473844.meta, suffix=.meta, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,33411,1690049473844, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs, maxLogs=32 2023-07-22 18:11:16,927 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:16,929 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:16,938 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:16,949 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,33411,1690049473844/jenkins-hbase4.apache.org%2C33411%2C1690049473844.meta.1690049476903.meta 2023-07-22 18:11:16,951 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK], DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK]] 2023-07-22 18:11:16,951 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:16,954 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:16,957 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 18:11:16,960 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-22 18:11:16,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 18:11:16,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:16,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 18:11:16,966 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 18:11:16,971 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:16,973 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info 2023-07-22 18:11:16,973 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info 2023-07-22 18:11:16,974 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor 
true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:16,975 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:16,975 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:16,977 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:16,977 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:16,977 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:16,978 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:16,979 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:16,980 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table 2023-07-22 18:11:16,980 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table 2023-07-22 18:11:16,981 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, 
single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:16,981 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:16,987 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740 2023-07-22 18:11:16,991 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740 2023-07-22 18:11:16,995 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 18:11:16,998 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:16,999 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11424901440, jitterRate=0.0640268623828888}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:17,000 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:17,007 WARN [ReadOnlyZKClient-127.0.0.1:62144@0x20eaf84c] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-22 18:11:17,023 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690049476864 2023-07-22 18:11:17,048 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:17,055 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37266, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:17,057 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33411] ipc.CallRunner(144): callId: 1 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:37266 deadline: 1690049537055, exception=org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region hbase:meta,,1 is opening on jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:17,061 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 18:11:17,062 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33411,1690049473844, state=OPEN 2023-07-22 18:11:17,064 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 
18:11:17,067 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 18:11:17,067 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:17,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-22 18:11:17,073 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33411,1690049473844 in 394 msec 2023-07-22 18:11:17,080 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-22 18:11:17,081 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 638 msec 2023-07-22 18:11:17,097 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 1.3750 sec 2023-07-22 18:11:17,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690049477097, completionTime=-1 2023-07-22 18:11:17,098 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-22 18:11:17,098 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-22 18:11:17,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-22 18:11:17,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690049537177 2023-07-22 18:11:17,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690049597177 2023-07-22 18:11:17,177 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 78 msec 2023-07-22 18:11:17,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40289,1690049471773-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:17,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40289,1690049471773-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:17,208 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40289,1690049471773-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:17,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:40289, period=300000, unit=MILLISECONDS is enabled. 
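Once the entries above mark the hbase:meta location as OPEN in ZooKeeper and /hbase/meta-region-server is updated, a client can resolve that location through the RegionLocator API. The sketch below is illustrative only; it assumes an already-built client Configuration pointing at this cluster and uses a made-up class name.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Hypothetical sketch: looks up which region server currently hosts hbase:meta.
    public final class MetaLocationSketch {
      public static HRegionLocation locateMeta(Configuration conf) throws IOException {
        try (Connection connection = ConnectionFactory.createConnection(conf);
             RegionLocator locator = connection.getRegionLocator(TableName.META_TABLE_NAME)) {
          // The empty start row resolves the single hbase:meta region.
          return locator.getRegionLocation(HConstants.EMPTY_START_ROW);
        }
      }
    }
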
2023-07-22 18:11:17,211 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:17,220 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-22 18:11:17,236 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-22 18:11:17,238 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:17,251 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-22 18:11:17,254 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:17,258 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:17,284 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,287 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 empty. 
2023-07-22 18:11:17,287 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,288 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-22 18:11:17,328 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:17,331 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => fe5e9f07ec9c7007b36085471b5cd477, NAME => 'hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:17,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:17,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing fe5e9f07ec9c7007b36085471b5cd477, disabling compactions & flushes 2023-07-22 18:11:17,350 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:17,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:17,350 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. after waiting 0 ms 2023-07-22 18:11:17,351 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:17,351 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:17,351 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for fe5e9f07ec9c7007b36085471b5cd477: 2023-07-22 18:11:17,355 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:17,372 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049477358"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049477358"}]},"ts":"1690049477358"} 2023-07-22 18:11:17,403 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 18:11:17,406 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:17,410 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049477406"}]},"ts":"1690049477406"} 2023-07-22 18:11:17,415 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-22 18:11:17,420 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:17,421 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:17,421 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:17,421 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:17,421 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:17,423 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, ASSIGN}] 2023-07-22 18:11:17,426 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, ASSIGN 2023-07-22 18:11:17,427 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:17,578 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 18:11:17,580 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fe5e9f07ec9c7007b36085471b5cd477, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:17,580 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049477580"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049477580"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049477580"}]},"ts":"1690049477580"} 2023-07-22 18:11:17,586 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure fe5e9f07ec9c7007b36085471b5cd477, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:17,740 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:17,741 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:17,744 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49710, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:17,750 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:17,750 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fe5e9f07ec9c7007b36085471b5cd477, NAME => 'hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:17,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,751 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:17,752 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,752 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,754 INFO [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,756 DEBUG [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info 2023-07-22 18:11:17,756 DEBUG [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info 2023-07-22 18:11:17,757 INFO [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fe5e9f07ec9c7007b36085471b5cd477 columnFamilyName info 2023-07-22 18:11:17,758 INFO [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] regionserver.HStore(310): Store=fe5e9f07ec9c7007b36085471b5cd477/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:17,759 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,760 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,764 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:17,768 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:17,769 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fe5e9f07ec9c7007b36085471b5cd477; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9818264320, jitterRate=-0.08560287952423096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:17,769 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fe5e9f07ec9c7007b36085471b5cd477: 2023-07-22 18:11:17,771 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477., pid=6, masterSystemTime=1690049477740 2023-07-22 18:11:17,776 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:17,776 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 
2023-07-22 18:11:17,778 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=fe5e9f07ec9c7007b36085471b5cd477, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:17,778 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049477777"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049477777"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049477777"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049477777"}]},"ts":"1690049477777"} 2023-07-22 18:11:17,785 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-22 18:11:17,785 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure fe5e9f07ec9c7007b36085471b5cd477, server=jenkins-hbase4.apache.org,38977,1690049474061 in 195 msec 2023-07-22 18:11:17,789 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-22 18:11:17,790 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, ASSIGN in 362 msec 2023-07-22 18:11:17,791 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:17,791 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049477791"}]},"ts":"1690049477791"} 2023-07-22 18:11:17,794 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-22 18:11:17,797 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:17,800 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 559 msec 2023-07-22 18:11:17,854 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-22 18:11:17,856 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:17,856 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:17,882 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:17,885 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49714, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-22 18:11:17,903 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-22 18:11:17,924 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:17,930 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 38 msec 2023-07-22 18:11:17,936 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-22 18:11:17,947 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:17,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 16 msec 2023-07-22 18:11:17,963 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-22 18:11:17,966 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-22 18:11:17,966 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.452sec 2023-07-22 18:11:17,968 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-22 18:11:17,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-22 18:11:17,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-22 18:11:17,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40289,1690049471773-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-22 18:11:17,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,40289,1690049471773-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-22 18:11:17,980 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-22 18:11:18,018 DEBUG [Listener at localhost/37829] zookeeper.ReadOnlyZKClient(139): Connect 0x702c0ae8 to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:18,024 DEBUG [Listener at localhost/37829] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3147adb2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:18,040 DEBUG [hconnection-0x5a2c0b37-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:18,055 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37268, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:18,067 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:18,069 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:18,088 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:18,091 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-22 18:11:18,093 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:18,095 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:18,098 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,099 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 empty. 
2023-07-22 18:11:18,101 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,102 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-22 18:11:18,126 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:18,128 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => ca604f964db2e93cbe231535895107a6, NAME => 'hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:18,152 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:18,152 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing ca604f964db2e93cbe231535895107a6, disabling compactions & flushes 2023-07-22 18:11:18,152 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:18,152 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:18,152 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. after waiting 0 ms 2023-07-22 18:11:18,152 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:18,152 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 
2023-07-22 18:11:18,152 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for ca604f964db2e93cbe231535895107a6: 2023-07-22 18:11:18,157 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:18,159 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049478158"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049478158"}]},"ts":"1690049478158"} 2023-07-22 18:11:18,161 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:18,163 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:18,163 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049478163"}]},"ts":"1690049478163"} 2023-07-22 18:11:18,166 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-22 18:11:18,170 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:18,170 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:18,170 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:18,170 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:18,170 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:18,171 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, ASSIGN}] 2023-07-22 18:11:18,173 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, ASSIGN 2023-07-22 18:11:18,175 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:18,325 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 18:11:18,327 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=ca604f964db2e93cbe231535895107a6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:18,327 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049478327"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049478327"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049478327"}]},"ts":"1690049478327"} 2023-07-22 18:11:18,331 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:18,491 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:18,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca604f964db2e93cbe231535895107a6, NAME => 'hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:18,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:18,492 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. service=MultiRowMutationService 2023-07-22 18:11:18,493 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-22 18:11:18,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:18,493 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,494 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,497 INFO [StoreOpener-ca604f964db2e93cbe231535895107a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,500 DEBUG [StoreOpener-ca604f964db2e93cbe231535895107a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m 2023-07-22 18:11:18,500 DEBUG [StoreOpener-ca604f964db2e93cbe231535895107a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m 2023-07-22 18:11:18,500 INFO [StoreOpener-ca604f964db2e93cbe231535895107a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ca604f964db2e93cbe231535895107a6 columnFamilyName m 2023-07-22 18:11:18,501 INFO [StoreOpener-ca604f964db2e93cbe231535895107a6-1] regionserver.HStore(310): Store=ca604f964db2e93cbe231535895107a6/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:18,509 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,512 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:18,529 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:18,530 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ca604f964db2e93cbe231535895107a6; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@1a06bcc1, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:18,530 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ca604f964db2e93cbe231535895107a6: 2023-07-22 18:11:18,532 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6., pid=11, masterSystemTime=1690049478485 2023-07-22 18:11:18,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:18,536 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:18,537 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=ca604f964db2e93cbe231535895107a6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:18,539 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049478537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049478537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049478537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049478537"}]},"ts":"1690049478537"} 2023-07-22 18:11:18,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-22 18:11:18,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,33411,1690049473844 in 211 msec 2023-07-22 18:11:18,555 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-22 18:11:18,556 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, ASSIGN in 377 msec 2023-07-22 18:11:18,559 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:18,559 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049478559"}]},"ts":"1690049478559"} 2023-07-22 18:11:18,562 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated 
tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-22 18:11:18,566 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:18,569 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 478 msec 2023-07-22 18:11:18,610 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-22 18:11:18,610 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-22 18:11:18,682 DEBUG [Listener at localhost/37829] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-22 18:11:18,711 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34802, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-22 18:11:18,725 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:18,725 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:18,730 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-22 18:11:18,730 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:18,730 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:18,731 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-22 18:11:18,738 DEBUG [Listener at localhost/37829] zookeeper.ReadOnlyZKClient(139): Connect 0x43497e15 to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:18,738 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-22 18:11:18,799 DEBUG [Listener at localhost/37829] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2be10837, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:18,799 INFO [Listener at localhost/37829] 
zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:62144 2023-07-22 18:11:18,804 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:18,811 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018e3ae4b0000a connected 2023-07-22 18:11:18,839 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=424, OpenFileDescriptor=680, MaxFileDescriptor=60000, SystemLoadAverage=389, ProcessCount=174, AvailableMemoryMB=6837 2023-07-22 18:11:18,841 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-22 18:11:18,868 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:18,870 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:18,916 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-22 18:11:18,955 INFO [Listener at localhost/37829] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:18,955 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:18,956 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:18,956 INFO [Listener at localhost/37829] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:18,956 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:18,956 INFO [Listener at localhost/37829] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:18,956 INFO [Listener at localhost/37829] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:18,976 INFO [Listener at localhost/37829] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45471 2023-07-22 18:11:18,977 INFO [Listener at localhost/37829] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:19,030 DEBUG [Listener at localhost/37829] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:19,033 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 
2023-07-22 18:11:19,035 INFO [Listener at localhost/37829] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:19,037 INFO [Listener at localhost/37829] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:45471 connecting to ZooKeeper ensemble=127.0.0.1:62144 2023-07-22 18:11:19,079 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:454710x0, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:19,080 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(162): regionserver:454710x0, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:19,081 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(162): regionserver:454710x0, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-22 18:11:19,082 DEBUG [Listener at localhost/37829] zookeeper.ZKUtil(164): regionserver:454710x0, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:19,136 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45471 2023-07-22 18:11:19,136 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:45471-0x1018e3ae4b0000b connected 2023-07-22 18:11:19,136 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45471 2023-07-22 18:11:19,140 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45471 2023-07-22 18:11:19,196 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45471 2023-07-22 18:11:19,197 DEBUG [Listener at localhost/37829] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45471 2023-07-22 18:11:19,206 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:19,206 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:19,206 INFO [Listener at localhost/37829] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:19,207 INFO [Listener at localhost/37829] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:19,207 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:19,207 INFO [Listener at localhost/37829] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:19,207 INFO [Listener at localhost/37829] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home 
system property not specified. Disabling /prof endpoint. 2023-07-22 18:11:19,208 INFO [Listener at localhost/37829] http.HttpServer(1146): Jetty bound to port 36289 2023-07-22 18:11:19,208 INFO [Listener at localhost/37829] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:19,216 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:19,216 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2c02caab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:19,217 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:19,217 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6dd20c46{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:19,356 INFO [Listener at localhost/37829] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:19,357 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:19,357 INFO [Listener at localhost/37829] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:19,357 INFO [Listener at localhost/37829] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:19,364 INFO [Listener at localhost/37829] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:19,365 INFO [Listener at localhost/37829] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@780935ef{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/java.io.tmpdir/jetty-0_0_0_0-36289-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3018330381483066328/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:19,367 INFO [Listener at localhost/37829] server.AbstractConnector(333): Started ServerConnector@6057e31f{HTTP/1.1, (http/1.1)}{0.0.0.0:36289} 2023-07-22 18:11:19,367 INFO [Listener at localhost/37829] server.Server(415): Started @13320ms 2023-07-22 18:11:19,371 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(951): ClusterId : 5cc371fe-a800-412b-aae5-b6a77a194597 2023-07-22 18:11:19,375 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:19,377 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:19,377 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:19,380 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:19,382 DEBUG 
[RS:3;jenkins-hbase4:45471] zookeeper.ReadOnlyZKClient(139): Connect 0x3baf0687 to 127.0.0.1:62144 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:19,389 DEBUG [RS:3;jenkins-hbase4:45471] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3568ef82, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:19,389 DEBUG [RS:3;jenkins-hbase4:45471] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@31576df7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:19,399 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:45471 2023-07-22 18:11:19,399 INFO [RS:3;jenkins-hbase4:45471] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:19,399 INFO [RS:3;jenkins-hbase4:45471] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:19,399 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:19,401 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,40289,1690049471773 with isa=jenkins-hbase4.apache.org/172.31.14.131:45471, startcode=1690049478954 2023-07-22 18:11:19,401 DEBUG [RS:3;jenkins-hbase4:45471] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:19,406 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52501, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:19,406 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=40289] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,406 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 18:11:19,407 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf 2023-07-22 18:11:19,407 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43335 2023-07-22 18:11:19,407 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=43673 2023-07-22 18:11:19,412 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:19,412 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:19,412 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:19,412 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:19,412 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:19,419 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,45471,1690049478954] 2023-07-22 18:11:19,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:19,419 DEBUG [RS:3;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,419 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:19,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:19,420 WARN [RS:3;jenkins-hbase4:45471] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:19,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:19,420 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:19,422 INFO [RS:3;jenkins-hbase4:45471] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:19,422 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,425 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,40289,1690049471773] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-22 18:11:19,425 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:19,425 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,425 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:19,426 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,426 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:19,426 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,426 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:19,427 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:19,431 DEBUG [RS:3;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:19,432 DEBUG [RS:3;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:19,432 DEBUG [RS:3;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,433 DEBUG [RS:3;jenkins-hbase4:45471] zookeeper.ZKUtil(162): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:19,434 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:19,435 INFO [RS:3;jenkins-hbase4:45471] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:19,438 INFO [RS:3;jenkins-hbase4:45471] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:19,439 INFO [RS:3;jenkins-hbase4:45471] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:19,439 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:19,441 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:19,443 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,444 DEBUG [RS:3;jenkins-hbase4:45471] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:19,448 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:19,448 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:19,448 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:19,461 INFO [RS:3;jenkins-hbase4:45471] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:19,461 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45471,1690049478954-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:19,472 INFO [RS:3;jenkins-hbase4:45471] regionserver.Replication(203): jenkins-hbase4.apache.org,45471,1690049478954 started 2023-07-22 18:11:19,472 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,45471,1690049478954, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:45471, sessionid=0x1018e3ae4b0000b 2023-07-22 18:11:19,472 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:19,472 DEBUG [RS:3;jenkins-hbase4:45471] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,472 DEBUG [RS:3;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45471,1690049478954' 2023-07-22 18:11:19,472 DEBUG [RS:3;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:19,473 DEBUG [RS:3;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:19,473 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:19,473 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:19,473 DEBUG [RS:3;jenkins-hbase4:45471] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:19,473 DEBUG [RS:3;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,45471,1690049478954' 2023-07-22 18:11:19,473 DEBUG [RS:3;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:19,474 DEBUG [RS:3;jenkins-hbase4:45471] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:19,474 DEBUG [RS:3;jenkins-hbase4:45471] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:19,474 INFO [RS:3;jenkins-hbase4:45471] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:19,474 INFO [RS:3;jenkins-hbase4:45471] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 18:11:19,479 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:19,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:19,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:19,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:19,489 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:19,491 DEBUG [hconnection-0x744a8a1-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:19,495 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:37284, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:19,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:19,503 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:19,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:19,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:19,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:34802 deadline: 1690050679511, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
2023-07-22 18:11:19,513 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:19,515 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:19,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:19,517 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:19,517 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:19,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:19,524 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:19,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:19,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:19,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:19,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:19,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:19,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:19,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:19,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:19,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:19,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:19,548 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:19,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:19,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:19,552 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:19,553 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:19,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(238): Moving server region ca604f964db2e93cbe231535895107a6, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:19,558 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, REOPEN/MOVE 2023-07-22 18:11:19,559 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, REOPEN/MOVE 2023-07-22 18:11:19,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:19,561 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ca604f964db2e93cbe231535895107a6, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:19,561 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049479561"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049479561"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049479561"}]},"ts":"1690049479561"} 2023-07-22 18:11:19,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-22 18:11:19,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-22 18:11:19,562 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-22 18:11:19,563 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33411,1690049473844, state=CLOSING 2023-07-22 18:11:19,564 INFO [PEWorker-4] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:19,565 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 18:11:19,565 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:19,565 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:19,573 DEBUG [PEWorker-5] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:19,578 INFO [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45471%2C1690049478954, suffix=, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,45471,1690049478954, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs, maxLogs=32 2023-07-22 18:11:19,602 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:19,605 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:19,605 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:19,609 INFO [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,45471,1690049478954/jenkins-hbase4.apache.org%2C45471%2C1690049478954.1690049479579 2023-07-22 18:11:19,609 DEBUG [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK], DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK]] 2023-07-22 18:11:19,727 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-22 18:11:19,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:19,728 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 
18:11:19,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:19,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:19,728 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:19,729 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-22 18:11:19,811 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/info/60a8c08b751645729d685df8c7db9c6c 2023-07-22 18:11:19,899 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/table/9d3381acca394b088304fe2d8e80aa2d 2023-07-22 18:11:19,908 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/info/60a8c08b751645729d685df8c7db9c6c as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info/60a8c08b751645729d685df8c7db9c6c 2023-07-22 18:11:19,918 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info/60a8c08b751645729d685df8c7db9c6c, entries=21, sequenceid=15, filesize=7.1 K 2023-07-22 18:11:19,921 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/table/9d3381acca394b088304fe2d8e80aa2d as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table/9d3381acca394b088304fe2d8e80aa2d 2023-07-22 18:11:19,929 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table/9d3381acca394b088304fe2d8e80aa2d, entries=4, sequenceid=15, filesize=4.8 K 2023-07-22 18:11:19,932 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 203ms, sequenceid=15, compaction requested=false 2023-07-22 18:11:19,933 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-22 18:11:19,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-22 
18:11:19,948 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:19,949 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:19,949 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:19,949 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,45471,1690049478954 record at close sequenceid=15 2023-07-22 18:11:19,952 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-22 18:11:19,952 WARN [PEWorker-2] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-22 18:11:19,955 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-22 18:11:19,956 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33411,1690049473844 in 387 msec 2023-07-22 18:11:19,957 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:20,107 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 18:11:20,107 INFO [PEWorker-4] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45471,1690049478954, state=OPENING 2023-07-22 18:11:20,109 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 18:11:20,109 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:20,109 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:20,263 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:20,263 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:20,267 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45878, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:20,272 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 18:11:20,272 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:20,275 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45471%2C1690049478954.meta, suffix=.meta, logDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,45471,1690049478954, archiveDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs, maxLogs=32 2023-07-22 18:11:20,300 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK] 2023-07-22 18:11:20,301 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK] 2023-07-22 18:11:20,305 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK] 2023-07-22 18:11:20,308 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/WALs/jenkins-hbase4.apache.org,45471,1690049478954/jenkins-hbase4.apache.org%2C45471%2C1690049478954.meta.1690049480276.meta 2023-07-22 18:11:20,309 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:32801,DS-a07f8497-4121-4834-b6d4-6fdd9a2861b6,DISK], DatanodeInfoWithStorage[127.0.0.1:39251,DS-73ccb354-3fc7-4c94-8af0-c23432cafde2,DISK], DatanodeInfoWithStorage[127.0.0.1:46673,DS-88e59944-cd8d-42aa-8795-e5b7a0303e3d,DISK]] 2023-07-22 18:11:20,309 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:20,309 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:20,309 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 18:11:20,309 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-22 18:11:20,310 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 18:11:20,310 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:20,310 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 18:11:20,310 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 18:11:20,312 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:20,313 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info 2023-07-22 18:11:20,314 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info 2023-07-22 18:11:20,314 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:20,336 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info/60a8c08b751645729d685df8c7db9c6c 2023-07-22 18:11:20,337 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:20,337 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:20,338 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:20,338 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:20,339 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:20,340 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:20,340 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:20,341 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table 2023-07-22 18:11:20,341 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table 2023-07-22 18:11:20,341 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:20,351 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table/9d3381acca394b088304fe2d8e80aa2d 2023-07-22 18:11:20,351 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:20,353 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740 2023-07-22 18:11:20,355 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740 2023-07-22 18:11:20,358 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 18:11:20,360 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:20,362 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10373180320, jitterRate=-0.03392229974269867}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:20,362 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:20,363 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1690049480263 2023-07-22 18:11:20,367 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 18:11:20,368 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 18:11:20,369 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,45471,1690049478954, state=OPEN 2023-07-22 18:11:20,371 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 18:11:20,372 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:20,375 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-22 18:11:20,375 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,45471,1690049478954 in 262 msec 2023-07-22 18:11:20,377 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 815 msec 2023-07-22 18:11:20,525 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:20,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ca604f964db2e93cbe231535895107a6, disabling compactions & flushes 2023-07-22 18:11:20,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:20,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:20,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. after waiting 0 ms 2023-07-22 18:11:20,526 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:20,526 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ca604f964db2e93cbe231535895107a6 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-22 18:11:20,563 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-22 18:11:20,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/.tmp/m/d405d815f9af43289bf4bf07824ff6c9 2023-07-22 18:11:20,584 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/.tmp/m/d405d815f9af43289bf4bf07824ff6c9 as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m/d405d815f9af43289bf4bf07824ff6c9 2023-07-22 18:11:20,596 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m/d405d815f9af43289bf4bf07824ff6c9, entries=3, sequenceid=9, filesize=5.2 K 2023-07-22 18:11:20,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for ca604f964db2e93cbe231535895107a6 in 74ms, sequenceid=9, compaction requested=false 2023-07-22 18:11:20,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-22 18:11:20,609 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-22 18:11:20,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:20,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:20,611 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ca604f964db2e93cbe231535895107a6: 2023-07-22 18:11:20,611 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding ca604f964db2e93cbe231535895107a6 move to jenkins-hbase4.apache.org,45471,1690049478954 record at close sequenceid=9 2023-07-22 18:11:20,614 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:20,615 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ca604f964db2e93cbe231535895107a6, regionState=CLOSED 2023-07-22 18:11:20,615 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049480615"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049480615"}]},"ts":"1690049480615"} 2023-07-22 18:11:20,617 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33411] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:37266 deadline: 1690049540616, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1690049478954. As of locationSeqNum=15. 2023-07-22 18:11:20,719 DEBUG [PEWorker-4] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:20,720 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59768, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:20,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-22 18:11:20,728 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,33411,1690049473844 in 1.1600 sec 2023-07-22 18:11:20,729 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:20,879 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 18:11:20,880 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ca604f964db2e93cbe231535895107a6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:20,880 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049480879"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049480879"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049480879"}]},"ts":"1690049480879"} 2023-07-22 18:11:20,883 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=12, state=RUNNABLE; OpenRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:21,047 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:21,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => ca604f964db2e93cbe231535895107a6, NAME => 'hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:21,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:21,047 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. service=MultiRowMutationService 2023-07-22 18:11:21,047 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-22 18:11:21,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:21,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,048 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,052 INFO [StoreOpener-ca604f964db2e93cbe231535895107a6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,054 DEBUG [StoreOpener-ca604f964db2e93cbe231535895107a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m 2023-07-22 18:11:21,054 DEBUG [StoreOpener-ca604f964db2e93cbe231535895107a6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m 2023-07-22 18:11:21,055 INFO [StoreOpener-ca604f964db2e93cbe231535895107a6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region ca604f964db2e93cbe231535895107a6 columnFamilyName m 2023-07-22 18:11:21,076 DEBUG [StoreOpener-ca604f964db2e93cbe231535895107a6-1] regionserver.HStore(539): loaded hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m/d405d815f9af43289bf4bf07824ff6c9 2023-07-22 18:11:21,076 INFO [StoreOpener-ca604f964db2e93cbe231535895107a6-1] regionserver.HStore(310): Store=ca604f964db2e93cbe231535895107a6/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:21,078 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,080 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:21,090 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened ca604f964db2e93cbe231535895107a6; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@602f64cf, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:21,090 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for ca604f964db2e93cbe231535895107a6: 2023-07-22 18:11:21,091 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6., pid=17, masterSystemTime=1690049481036 2023-07-22 18:11:21,094 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:21,094 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:21,095 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=12 updating hbase:meta row=ca604f964db2e93cbe231535895107a6, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:21,095 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049481094"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049481094"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049481094"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049481094"}]},"ts":"1690049481094"} 2023-07-22 18:11:21,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=12 2023-07-22 18:11:21,102 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=12, state=SUCCESS; OpenRegionProcedure ca604f964db2e93cbe231535895107a6, server=jenkins-hbase4.apache.org,45471,1690049478954 in 215 msec 2023-07-22 18:11:21,104 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=ca604f964db2e93cbe231535895107a6, REOPEN/MOVE in 1.5450 sec 2023-07-22 18:11:21,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to default 2023-07-22 18:11:21,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:21,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:21,566 DEBUG 
[RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33411] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:37284 deadline: 1690049541566, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1690049478954. As of locationSeqNum=9. 2023-07-22 18:11:21,669 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33411] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:37284 deadline: 1690049541669, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1690049478954. As of locationSeqNum=15. 2023-07-22 18:11:21,772 DEBUG [hconnection-0x744a8a1-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:21,776 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59778, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:21,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:21,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:21,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:21,808 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:21,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:21,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:21,827 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:21,830 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33411] ipc.CallRunner(144): callId: 46 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:37266 deadline: 1690049541829, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1690049478954. As of locationSeqNum=9. 
2023-07-22 18:11:21,833 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-22 18:11:21,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-22 18:11:21,937 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:21,938 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:21,940 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:21,941 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:21,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-22 18:11:21,955 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:21,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 18:11:21,960 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-22 18:11:21,961 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:21,961 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-22 18:11:21,961 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 18:11:21,961 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-22 18:11:21,963 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:21,963 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:21,964 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f empty. 
2023-07-22 18:11:21,965 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:21,965 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 empty. 2023-07-22 18:11:21,966 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:21,966 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:21,966 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:21,967 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:21,967 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f empty. 2023-07-22 18:11:21,969 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c empty. 2023-07-22 18:11:21,970 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:21,976 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:21,976 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 empty. 
2023-07-22 18:11:21,985 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:21,988 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 18:11:22,070 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:22,072 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => f697d57573425a043e6da37a27af9c2f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:22,078 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 0b9647966cbbf3a3683d7d737d062e73, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:22,083 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 970de01dbb336fa7f28008075b40701f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:22,144 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,145 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 
0b9647966cbbf3a3683d7d737d062e73, disabling compactions & flushes 2023-07-22 18:11:22,145 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:22,145 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:22,145 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. after waiting 0 ms 2023-07-22 18:11:22,145 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:22,146 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:22,146 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 0b9647966cbbf3a3683d7d737d062e73: 2023-07-22 18:11:22,146 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 7afb95b2b8f66881030302e3e19e632c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:22,148 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,149 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 970de01dbb336fa7f28008075b40701f, disabling compactions & flushes 2023-07-22 18:11:22,149 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:22,149 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:22,150 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 
after waiting 0 ms 2023-07-22 18:11:22,150 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,151 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing f697d57573425a043e6da37a27af9c2f, disabling compactions & flushes 2023-07-22 18:11:22,150 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:22,151 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:22,151 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:22,151 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:22,151 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 970de01dbb336fa7f28008075b40701f: 2023-07-22 18:11:22,151 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. after waiting 0 ms 2023-07-22 18:11:22,151 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:22,151 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 
2023-07-22 18:11:22,152 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for f697d57573425a043e6da37a27af9c2f: 2023-07-22 18:11:22,152 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => a46810d87c2fafa3237a5a28bde8a685, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:22,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-22 18:11:22,190 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing a46810d87c2fafa3237a5a28bde8a685, disabling compactions & flushes 2023-07-22 18:11:22,191 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:22,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:22,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. after waiting 0 ms 2023-07-22 18:11:22,191 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:22,192 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 
2023-07-22 18:11:22,192 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for a46810d87c2fafa3237a5a28bde8a685: 2023-07-22 18:11:22,195 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,195 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 7afb95b2b8f66881030302e3e19e632c, disabling compactions & flushes 2023-07-22 18:11:22,195 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:22,195 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:22,195 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. after waiting 0 ms 2023-07-22 18:11:22,195 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:22,195 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:22,195 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 7afb95b2b8f66881030302e3e19e632c: 2023-07-22 18:11:22,199 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:22,201 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049482200"}]},"ts":"1690049482200"} 2023-07-22 18:11:22,201 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049482200"}]},"ts":"1690049482200"} 2023-07-22 18:11:22,201 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049482200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049482200"}]},"ts":"1690049482200"} 2023-07-22 18:11:22,201 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049482200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049482200"}]},"ts":"1690049482200"} 2023-07-22 18:11:22,202 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049482200"}]},"ts":"1690049482200"} 2023-07-22 18:11:22,250 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-22 18:11:22,252 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:22,252 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049482252"}]},"ts":"1690049482252"} 2023-07-22 18:11:22,254 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-22 18:11:22,260 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:22,260 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:22,260 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:22,260 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:22,260 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, ASSIGN}] 2023-07-22 18:11:22,263 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, ASSIGN 2023-07-22 18:11:22,263 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, ASSIGN 2023-07-22 18:11:22,264 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, ASSIGN 2023-07-22 18:11:22,264 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, ASSIGN 2023-07-22 18:11:22,266 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, ASSIGN 2023-07-22 18:11:22,266 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:22,266 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:22,266 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:22,267 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:22,268 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:22,416 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-22 18:11:22,420 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:22,420 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049482420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049482420"}]},"ts":"1690049482420"} 2023-07-22 18:11:22,421 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:22,421 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:22,421 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049482420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049482420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049482420"}]},"ts":"1690049482420"} 2023-07-22 18:11:22,421 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049482421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049482421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049482421"}]},"ts":"1690049482421"} 2023-07-22 18:11:22,420 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:22,421 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482420"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049482420"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049482420"}]},"ts":"1690049482420"} 2023-07-22 18:11:22,421 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:22,422 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049482421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049482421"}]},"ts":"1690049482421"} 2023-07-22 18:11:22,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=20, state=RUNNABLE; OpenRegionProcedure 
0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:22,428 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=19, state=RUNNABLE; OpenRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:22,430 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; OpenRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:22,431 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=22, state=RUNNABLE; OpenRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:22,432 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=21, state=RUNNABLE; OpenRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:22,454 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-22 18:11:22,580 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 18:11:22,583 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:22,583 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f697d57573425a043e6da37a27af9c2f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 18:11:22,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 
2023-07-22 18:11:22,590 INFO [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,590 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 970de01dbb336fa7f28008075b40701f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 18:11:22,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,594 DEBUG [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/f 2023-07-22 18:11:22,594 DEBUG [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/f 2023-07-22 18:11:22,595 INFO [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f697d57573425a043e6da37a27af9c2f columnFamilyName f 2023-07-22 18:11:22,595 INFO [StoreOpener-970de01dbb336fa7f28008075b40701f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,595 INFO [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] regionserver.HStore(310): Store=f697d57573425a043e6da37a27af9c2f/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:22,599 DEBUG [StoreOpener-970de01dbb336fa7f28008075b40701f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/f 2023-07-22 18:11:22,599 DEBUG [StoreOpener-970de01dbb336fa7f28008075b40701f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/f 2023-07-22 18:11:22,599 INFO [StoreOpener-970de01dbb336fa7f28008075b40701f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 970de01dbb336fa7f28008075b40701f columnFamilyName f 2023-07-22 18:11:22,602 INFO [StoreOpener-970de01dbb336fa7f28008075b40701f-1] regionserver.HStore(310): Store=970de01dbb336fa7f28008075b40701f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:22,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,605 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,606 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:22,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:22,624 INFO 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 970de01dbb336fa7f28008075b40701f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9647193440, jitterRate=-0.10153509676456451}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:22,625 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 970de01dbb336fa7f28008075b40701f: 2023-07-22 18:11:22,626 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f., pid=28, masterSystemTime=1690049482584 2023-07-22 18:11:22,630 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:22,632 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:22,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:22,633 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:22,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a46810d87c2fafa3237a5a28bde8a685, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 18:11:22,633 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:22,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,633 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482633"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049482633"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049482633"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049482633"}]},"ts":"1690049482633"} 2023-07-22 18:11:22,633 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,634 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,636 INFO [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,640 DEBUG [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/f 2023-07-22 18:11:22,640 DEBUG [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/f 2023-07-22 18:11:22,643 INFO [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a46810d87c2fafa3237a5a28bde8a685 columnFamilyName f 2023-07-22 18:11:22,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=21 2023-07-22 18:11:22,644 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=21, state=SUCCESS; OpenRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,38977,1690049474061 in 206 msec 2023-07-22 18:11:22,645 INFO [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] regionserver.HStore(310): Store=a46810d87c2fafa3237a5a28bde8a685/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:22,646 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, ASSIGN in 384 msec 2023-07-22 18:11:22,653 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,654 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,662 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:22,663 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f697d57573425a043e6da37a27af9c2f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10089472640, jitterRate=-0.0603446364402771}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:22,664 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f697d57573425a043e6da37a27af9c2f: 2023-07-22 18:11:22,665 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:22,665 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f., pid=25, masterSystemTime=1690049482578 2023-07-22 18:11:22,671 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:22,672 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049482671"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049482671"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049482671"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049482671"}]},"ts":"1690049482671"} 2023-07-22 18:11:22,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:22,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:22,675 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 
2023-07-22 18:11:22,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0b9647966cbbf3a3683d7d737d062e73, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 18:11:22,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,676 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,682 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=19 2023-07-22 18:11:22,684 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=19, state=SUCCESS; OpenRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,45471,1690049478954 in 250 msec 2023-07-22 18:11:22,684 INFO [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,687 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:22,687 DEBUG [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/f 2023-07-22 18:11:22,687 DEBUG [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/f 2023-07-22 18:11:22,688 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a46810d87c2fafa3237a5a28bde8a685; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11076133760, jitterRate=0.03154534101486206}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:22,688 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a46810d87c2fafa3237a5a28bde8a685: 2023-07-22 18:11:22,690 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685., pid=26, masterSystemTime=1690049482584 2023-07-22 18:11:22,695 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, ASSIGN in 422 msec 2023-07-22 18:11:22,695 INFO [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0b9647966cbbf3a3683d7d737d062e73 columnFamilyName f 2023-07-22 18:11:22,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:22,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:22,696 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:22,698 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049482696"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049482696"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049482696"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049482696"}]},"ts":"1690049482696"} 2023-07-22 18:11:22,699 INFO [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] regionserver.HStore(310): Store=0b9647966cbbf3a3683d7d737d062e73/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:22,700 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,708 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-22 18:11:22,709 DEBUG [HBase-Metrics2-1] 
regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testTableMoveTruncateAndDrop' 2023-07-22 18:11:22,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:22,710 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-22 18:11:22,710 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; OpenRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,38977,1690049474061 in 276 msec 2023-07-22 18:11:22,716 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, ASSIGN in 450 msec 2023-07-22 18:11:22,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:22,719 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0b9647966cbbf3a3683d7d737d062e73; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11911368800, jitterRate=0.10933266580104828}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:22,719 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0b9647966cbbf3a3683d7d737d062e73: 2023-07-22 18:11:22,721 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73., pid=24, masterSystemTime=1690049482578 2023-07-22 18:11:22,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:22,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:22,724 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:22,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7afb95b2b8f66881030302e3e19e632c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 18:11:22,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:22,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,725 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:22,726 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482725"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049482725"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049482725"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049482725"}]},"ts":"1690049482725"} 2023-07-22 18:11:22,726 INFO [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,729 DEBUG [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/f 2023-07-22 18:11:22,729 DEBUG [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/f 2023-07-22 18:11:22,729 INFO [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7afb95b2b8f66881030302e3e19e632c columnFamilyName f 2023-07-22 18:11:22,730 INFO [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] regionserver.HStore(310): Store=7afb95b2b8f66881030302e3e19e632c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:22,731 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,740 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=20 2023-07-22 18:11:22,740 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=20, state=SUCCESS; OpenRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,45471,1690049478954 in 311 msec 2023-07-22 18:11:22,742 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:22,751 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:22,752 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, ASSIGN in 480 msec 2023-07-22 18:11:22,760 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7afb95b2b8f66881030302e3e19e632c; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9567946240, jitterRate=-0.10891556739807129}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:22,760 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7afb95b2b8f66881030302e3e19e632c: 2023-07-22 18:11:22,762 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c., pid=27, masterSystemTime=1690049482578 2023-07-22 18:11:22,765 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:22,765 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:22,767 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:22,767 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049482767"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049482767"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049482767"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049482767"}]},"ts":"1690049482767"} 2023-07-22 18:11:22,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=22 2023-07-22 18:11:22,775 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=22, state=SUCCESS; OpenRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,45471,1690049478954 in 339 msec 2023-07-22 18:11:22,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=18 2023-07-22 18:11:22,781 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, ASSIGN in 515 msec 2023-07-22 18:11:22,784 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:22,785 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049482785"}]},"ts":"1690049482785"} 2023-07-22 18:11:22,787 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-22 18:11:22,795 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:22,799 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 975 msec 2023-07-22 18:11:22,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-22 18:11:22,957 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-22 18:11:22,957 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-22 18:11:22,959 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:22,963 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33411] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:37268 deadline: 1690049542963, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1690049478954. As of locationSeqNum=15. 2023-07-22 18:11:23,067 DEBUG [hconnection-0x5a2c0b37-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:23,072 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:23,083 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-22 18:11:23,084 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:23,084 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-22 18:11:23,085 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:23,090 DEBUG [Listener at localhost/37829] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:23,096 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56472, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:23,099 DEBUG [Listener at localhost/37829] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:23,103 INFO [RS-EventLoopGroup-5-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38898, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:23,104 DEBUG [Listener at localhost/37829] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:23,108 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:39416, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:23,109 DEBUG [Listener at localhost/37829] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:23,111 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59800, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:23,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:23,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:23,126 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:23,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:23,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:23,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region f697d57573425a043e6da37a27af9c2f to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:23,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:23,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:23,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:23,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:23,147 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, REOPEN/MOVE 2023-07-22 18:11:23,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 0b9647966cbbf3a3683d7d737d062e73 to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,148 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, REOPEN/MOVE 2023-07-22 18:11:23,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:23,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:23,148 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:23,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:23,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:23,149 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:23,149 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483149"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483149"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483149"}]},"ts":"1690049483149"} 2023-07-22 18:11:23,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, REOPEN/MOVE 2023-07-22 18:11:23,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 970de01dbb336fa7f28008075b40701f to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,150 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, REOPEN/MOVE 2023-07-22 18:11:23,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:23,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:23,151 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:23,152 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483151"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483151"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483151"}]},"ts":"1690049483151"} 2023-07-22 18:11:23,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, REOPEN/MOVE 2023-07-22 18:11:23,152 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 7afb95b2b8f66881030302e3e19e632c to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,153 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, REOPEN/MOVE 2023-07-22 18:11:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:23,153 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=32, ppid=29, state=RUNNABLE; CloseRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:23,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:23,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, REOPEN/MOVE 2023-07-22 18:11:23,159 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:23,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region a46810d87c2fafa3237a5a28bde8a685 to RSGroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:23,161 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, REOPEN/MOVE 2023-07-22 18:11:23,161 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483159"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483159"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483159"}]},"ts":"1690049483159"} 2023-07-22 18:11:23,161 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=34, ppid=30, state=RUNNABLE; CloseRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,45471,1690049478954}] 
2023-07-22 18:11:23,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:23,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:23,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:23,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:23,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:23,163 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:23,163 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:23,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, REOPEN/MOVE 2023-07-22 18:11:23,164 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483163"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483163"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483163"}]},"ts":"1690049483163"} 2023-07-22 18:11:23,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1384651503, current retry=0 2023-07-22 18:11:23,165 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, REOPEN/MOVE 2023-07-22 18:11:23,166 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:23,166 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483166"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483166"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483166"}]},"ts":"1690049483166"} 2023-07-22 18:11:23,167 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=33, state=RUNNABLE; CloseRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:23,168 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=35, state=RUNNABLE; CloseRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:23,314 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0b9647966cbbf3a3683d7d737d062e73, disabling compactions & flushes 2023-07-22 18:11:23,316 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:23,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:23,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. after waiting 0 ms 2023-07-22 18:11:23,316 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:23,318 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a46810d87c2fafa3237a5a28bde8a685, disabling compactions & flushes 2023-07-22 18:11:23,319 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:23,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:23,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. after waiting 0 ms 2023-07-22 18:11:23,319 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 
2023-07-22 18:11:23,328 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:23,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:23,329 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:23,329 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0b9647966cbbf3a3683d7d737d062e73: 2023-07-22 18:11:23,329 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 0b9647966cbbf3a3683d7d737d062e73 move to jenkins-hbase4.apache.org,33411,1690049473844 record at close sequenceid=2 2023-07-22 18:11:23,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:23,331 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a46810d87c2fafa3237a5a28bde8a685: 2023-07-22 18:11:23,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding a46810d87c2fafa3237a5a28bde8a685 move to jenkins-hbase4.apache.org,33411,1690049473844 record at close sequenceid=2 2023-07-22 18:11:23,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,334 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7afb95b2b8f66881030302e3e19e632c, disabling compactions & flushes 2023-07-22 18:11:23,335 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:23,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:23,335 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. after waiting 0 ms 2023-07-22 18:11:23,336 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:23,336 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=CLOSED 2023-07-22 18:11:23,336 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483336"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049483336"}]},"ts":"1690049483336"} 2023-07-22 18:11:23,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 970de01dbb336fa7f28008075b40701f, disabling compactions & flushes 2023-07-22 18:11:23,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:23,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:23,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. after waiting 0 ms 2023-07-22 18:11:23,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 
2023-07-22 18:11:23,340 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=CLOSED 2023-07-22 18:11:23,340 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483340"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049483340"}]},"ts":"1690049483340"} 2023-07-22 18:11:23,351 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=34, resume processing ppid=30 2023-07-22 18:11:23,351 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, ppid=30, state=SUCCESS; CloseRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,45471,1690049478954 in 186 msec 2023-07-22 18:11:23,351 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=35 2023-07-22 18:11:23,351 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=35, state=SUCCESS; CloseRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,38977,1690049474061 in 179 msec 2023-07-22 18:11:23,358 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:23,358 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=35, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:23,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:23,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:23,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7afb95b2b8f66881030302e3e19e632c: 2023-07-22 18:11:23,365 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7afb95b2b8f66881030302e3e19e632c move to jenkins-hbase4.apache.org,38507,1690049474291 record at close sequenceid=2 2023-07-22 18:11:23,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:23,369 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,370 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,370 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:23,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 970de01dbb336fa7f28008075b40701f: 2023-07-22 18:11:23,371 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 970de01dbb336fa7f28008075b40701f move to jenkins-hbase4.apache.org,33411,1690049473844 record at close sequenceid=2 2023-07-22 18:11:23,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f697d57573425a043e6da37a27af9c2f, disabling compactions & flushes 2023-07-22 18:11:23,372 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:23,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:23,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. after waiting 0 ms 2023-07-22 18:11:23,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 
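The close path above finishes by writing an empty recovered.edits/<N>.seqid marker (here 4.seqid with newMaxSeqId=4) under each region directory, then reporting the region closed together with a "move to <server> record at close" hint. A minimal sketch, assuming direct access to the mini-cluster NameNode shown in the log (hdfs://localhost:43335), that lists those marker files for one region; the paths are copied from this run and only exist while it is up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListRecoveredEditsMarkers {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point at the mini-cluster NameNode from the log lines above.
        conf.set("fs.defaultFS", "hdfs://localhost:43335");
        FileSystem fs = FileSystem.get(conf);
        // Region directory layout as it appears in the WALSplitUtil lines:
        // <root>/data/<namespace>/<table>/<encodedRegionName>/recovered.edits/<N>.seqid
        Path recoveredEdits = new Path(
            "/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/"
            + "Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits");
        for (FileStatus st : fs.listStatus(recoveredEdits)) {
          // Files named <N>.seqid are empty markers carrying the max sequence id in their name.
          System.out.println(st.getPath().getName());
        }
      }
    }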
2023-07-22 18:11:23,372 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=CLOSED 2023-07-22 18:11:23,372 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483372"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049483372"}]},"ts":"1690049483372"} 2023-07-22 18:11:23,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:23,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:23,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f697d57573425a043e6da37a27af9c2f: 2023-07-22 18:11:23,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding f697d57573425a043e6da37a27af9c2f move to jenkins-hbase4.apache.org,38507,1690049474291 record at close sequenceid=2 2023-07-22 18:11:23,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,394 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,396 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=CLOSED 2023-07-22 18:11:23,396 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483396"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049483396"}]},"ts":"1690049483396"} 2023-07-22 18:11:23,396 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=CLOSED 2023-07-22 18:11:23,396 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483396"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049483396"}]},"ts":"1690049483396"} 2023-07-22 18:11:23,398 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=33 2023-07-22 18:11:23,398 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=33, state=SUCCESS; CloseRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,45471,1690049478954 in 225 msec 2023-07-22 18:11:23,401 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=33, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, 
REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:23,404 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-22 18:11:23,404 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,38977,1690049474061 in 236 msec 2023-07-22 18:11:23,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=32, resume processing ppid=29 2023-07-22 18:11:23,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=32, ppid=29, state=SUCCESS; CloseRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,45471,1690049478954 in 247 msec 2023-07-22 18:11:23,405 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:23,405 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:23,508 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
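With the CloseRegionProcedures finished, the balancer line above reports that all five REOPEN/MOVE transitions received an assignment candidate. A hedged sketch of how the resulting placement can be read back from a client once the regions reopen, using the standard RegionLocator API; the table name is taken from the log and a reachable cluster configuration is assumed:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public class ShowRegionPlacement {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(table)) {
          // Each HRegionLocation pairs a RegionInfo with the server currently hosting it,
          // as recorded in hbase:meta after the moves complete.
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }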
2023-07-22 18:11:23,508 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:23,508 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:23,509 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483508"}]},"ts":"1690049483508"} 2023-07-22 18:11:23,508 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:23,508 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:23,509 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483508"}]},"ts":"1690049483508"} 2023-07-22 18:11:23,509 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483508"}]},"ts":"1690049483508"} 2023-07-22 18:11:23,508 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:23,509 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483508"}]},"ts":"1690049483508"} 2023-07-22 18:11:23,509 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483508"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049483508"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049483508"}]},"ts":"1690049483508"} 2023-07-22 18:11:23,512 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=33, state=RUNNABLE; OpenRegionProcedure 
7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:23,513 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=29, state=RUNNABLE; OpenRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:23,515 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=30, state=RUNNABLE; OpenRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:23,516 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=31, state=RUNNABLE; OpenRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:23,519 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=35, state=RUNNABLE; OpenRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:23,665 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:23,665 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:23,667 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38906, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:23,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:23,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f697d57573425a043e6da37a27af9c2f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 18:11:23,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:23,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,678 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 
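The OpenRegionProcedures above are dispatched over AdminService to the chosen regionservers, which open each region through AssignRegionHandler. In test code built on HBaseTestingUtility, the usual way to block until this settles is waitUntilAllRegionsAssigned; a minimal sketch, assuming the utility instance has already started the mini cluster seen in this log:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public class WaitForAssignment {
      // util is assumed to be the HBaseTestingUtility that started this mini cluster.
      static void waitForTable(HBaseTestingUtility util) throws Exception {
        // Blocks until every region of the table is assigned and reflected in hbase:meta,
        // or fails once the utility's timeout expires.
        util.waitUntilAllRegionsAssigned(TableName.valueOf("Group_testTableMoveTruncateAndDrop"));
      }
    }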
2023-07-22 18:11:23,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => a46810d87c2fafa3237a5a28bde8a685, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 18:11:23,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:23,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,679 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,683 INFO [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,683 INFO [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,687 DEBUG [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/f 2023-07-22 18:11:23,687 DEBUG [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/f 2023-07-22 18:11:23,687 DEBUG [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/f 2023-07-22 18:11:23,687 DEBUG [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/f 2023-07-22 18:11:23,687 INFO [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f697d57573425a043e6da37a27af9c2f columnFamilyName f 2023-07-22 18:11:23,688 INFO [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region a46810d87c2fafa3237a5a28bde8a685 columnFamilyName f 2023-07-22 18:11:23,688 INFO [StoreOpener-f697d57573425a043e6da37a27af9c2f-1] regionserver.HStore(310): Store=f697d57573425a043e6da37a27af9c2f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:23,689 INFO [StoreOpener-a46810d87c2fafa3237a5a28bde8a685-1] regionserver.HStore(310): Store=a46810d87c2fafa3237a5a28bde8a685/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:23,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:23,702 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:23,702 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f697d57573425a043e6da37a27af9c2f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11300336480, jitterRate=0.052425846457481384}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:23,702 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f697d57573425a043e6da37a27af9c2f: 2023-07-22 18:11:23,704 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened a46810d87c2fafa3237a5a28bde8a685; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12023738560, jitterRate=0.11979791522026062}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:23,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for a46810d87c2fafa3237a5a28bde8a685: 2023-07-22 18:11:23,705 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f., pid=40, masterSystemTime=1690049483665 2023-07-22 18:11:23,709 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685., pid=43, masterSystemTime=1690049483670 2023-07-22 18:11:23,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:23,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:23,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:23,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7afb95b2b8f66881030302e3e19e632c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 18:11:23,713 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:23,714 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483713"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049483713"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049483713"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049483713"}]},"ts":"1690049483713"} 2023-07-22 18:11:23,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:23,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:23,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:23,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 
2023-07-22 18:11:23,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 0b9647966cbbf3a3683d7d737d062e73, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 18:11:23,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:23,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,716 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,716 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=35 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:23,717 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049483716"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049483716"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049483716"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049483716"}]},"ts":"1690049483716"} 2023-07-22 18:11:23,719 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=29 2023-07-22 18:11:23,720 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=29, state=SUCCESS; OpenRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,38507,1690049474291 in 204 msec 2023-07-22 18:11:23,720 INFO [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,721 INFO [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,722 DEBUG [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/f 2023-07-22 18:11:23,722 DEBUG [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/f 2023-07-22 18:11:23,723 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, REOPEN/MOVE in 574 msec 2023-07-22 18:11:23,723 INFO [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 0b9647966cbbf3a3683d7d737d062e73 columnFamilyName f 2023-07-22 18:11:23,723 DEBUG [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/f 2023-07-22 18:11:23,723 DEBUG [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/f 2023-07-22 18:11:23,724 INFO [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7afb95b2b8f66881030302e3e19e632c columnFamilyName f 2023-07-22 18:11:23,724 INFO [StoreOpener-0b9647966cbbf3a3683d7d737d062e73-1] regionserver.HStore(310): Store=0b9647966cbbf3a3683d7d737d062e73/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:23,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=35, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, REOPEN/MOVE in 563 msec 2023-07-22 18:11:23,725 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=35 2023-07-22 18:11:23,725 INFO [StoreOpener-7afb95b2b8f66881030302e3e19e632c-1] regionserver.HStore(310): Store=7afb95b2b8f66881030302e3e19e632c/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 
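The RegionStateStore Put lines in this stretch show which hbase:meta columns carry a transition: info:regioninfo plus info:sn and info:state while a region is OPENING or CLOSING, and info:server, info:serverstartcode and info:seqnumDuringOpen once it is OPEN. A hedged sketch that scans hbase:meta for this table's rows and prints the state and server columns; the row-key prefix follows the "<table>,<startKey>,<timestamp>.<encodedName>." format visible in the Put JSON:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DumpMetaStates {
      public static void main(String[] args) throws Exception {
        byte[] info = Bytes.toBytes("info");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
          // Region rows for a table are keyed "<table>,<startKey>,<timestamp>.<encodedName>.",
          // so a prefix scan on "<table>," covers all of them.
          Scan scan = new Scan().setRowPrefixFilter(
              Bytes.toBytes("Group_testTableMoveTruncateAndDrop,"));
          scan.addFamily(info);
          try (ResultScanner rs = meta.getScanner(scan)) {
            for (Result r : rs) {
              String state = Bytes.toString(r.getValue(info, Bytes.toBytes("state")));
              String server = Bytes.toString(r.getValue(info, Bytes.toBytes("server")));
              System.out.println(Bytes.toString(r.getRow()) + " state=" + state + " server=" + server);
            }
          }
        }
      }
    }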
2023-07-22 18:11:23,727 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=35, state=SUCCESS; OpenRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,33411,1690049473844 in 202 msec 2023-07-22 18:11:23,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:23,735 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 0b9647966cbbf3a3683d7d737d062e73; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9632180320, jitterRate=-0.10293330252170563}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:23,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:23,735 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 0b9647966cbbf3a3683d7d737d062e73: 2023-07-22 18:11:23,736 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73., pid=41, masterSystemTime=1690049483670 2023-07-22 18:11:23,737 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7afb95b2b8f66881030302e3e19e632c; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11305886240, jitterRate=0.052942708134651184}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:23,737 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7afb95b2b8f66881030302e3e19e632c: 2023-07-22 18:11:23,738 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c., pid=39, masterSystemTime=1690049483665 2023-07-22 18:11:23,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:23,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:23,739 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:23,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 970de01dbb336fa7f28008075b40701f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 18:11:23,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:23,740 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:23,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,740 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,740 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483740"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049483740"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049483740"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049483740"}]},"ts":"1690049483740"} 2023-07-22 18:11:23,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:23,741 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 
2023-07-22 18:11:23,742 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=33 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:23,742 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483741"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049483741"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049483741"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049483741"}]},"ts":"1690049483741"} 2023-07-22 18:11:23,743 INFO [StoreOpener-970de01dbb336fa7f28008075b40701f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,744 DEBUG [StoreOpener-970de01dbb336fa7f28008075b40701f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/f 2023-07-22 18:11:23,745 DEBUG [StoreOpener-970de01dbb336fa7f28008075b40701f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/f 2023-07-22 18:11:23,745 INFO [StoreOpener-970de01dbb336fa7f28008075b40701f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 970de01dbb336fa7f28008075b40701f columnFamilyName f 2023-07-22 18:11:23,746 INFO [StoreOpener-970de01dbb336fa7f28008075b40701f-1] regionserver.HStore(310): Store=970de01dbb336fa7f28008075b40701f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:23,747 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=30 2023-07-22 18:11:23,747 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=30, state=SUCCESS; OpenRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,33411,1690049473844 in 228 msec 2023-07-22 18:11:23,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,748 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=33 2023-07-22 18:11:23,748 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=33, state=SUCCESS; OpenRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,38507,1690049474291 in 232 msec 2023-07-22 18:11:23,749 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, REOPEN/MOVE in 599 msec 2023-07-22 18:11:23,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,751 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=33, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, REOPEN/MOVE in 594 msec 2023-07-22 18:11:23,755 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:23,756 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 970de01dbb336fa7f28008075b40701f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11114149120, jitterRate=0.03508579730987549}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:23,756 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 970de01dbb336fa7f28008075b40701f: 2023-07-22 18:11:23,757 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f., pid=42, masterSystemTime=1690049483670 2023-07-22 18:11:23,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:23,759 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 
2023-07-22 18:11:23,761 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:23,762 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049483761"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049483761"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049483761"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049483761"}]},"ts":"1690049483761"} 2023-07-22 18:11:23,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=31 2023-07-22 18:11:23,766 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=31, state=SUCCESS; OpenRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,33411,1690049473844 in 248 msec 2023-07-22 18:11:23,768 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, REOPEN/MOVE in 615 msec 2023-07-22 18:11:24,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-22 18:11:24,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1384651503. 
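At this point RSGroupAdminServer confirms that every region of the table sits in the target group Group_testTableMoveTruncateAndDrop_1384651503, and the RPCs that follow (MoveTables, ListRSGroupInfos, GetRSGroupInfoOfTable) are the client calls driving and verifying such a move. A hedged sketch of that sequence using RSGroupAdminClient, the client-side wrapper provided by the hbase-rsgroup module under test; the group name is copied from the log and the server address is a placeholder:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        String group = "Group_testTableMoveTruncateAndDrop_1384651503";
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup(group);
          // Placeholder regionserver address; the test moves real servers out of the default group.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 33411)), group);
          // This is the RSGroupAdminService.MoveTables call seen in the log; it triggers the
          // REOPEN/MOVE TransitRegionStateProcedures for every region of the table.
          rsGroupAdmin.moveTables(Collections.singleton(table), group);
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
          System.out.println("table now in group: " + info.getName());
        }
      }
    }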
2023-07-22 18:11:24,165 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:24,170 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:24,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:24,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:24,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:24,176 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:24,183 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:24,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:24,198 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:24,205 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049484205"}]},"ts":"1690049484205"} 2023-07-22 18:11:24,207 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-22 18:11:24,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-22 18:11:24,209 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-22 18:11:24,215 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, UNASSIGN}] 2023-07-22 18:11:24,217 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, UNASSIGN 2023-07-22 18:11:24,217 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, UNASSIGN 2023-07-22 18:11:24,217 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, UNASSIGN 2023-07-22 18:11:24,217 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, UNASSIGN 2023-07-22 18:11:24,218 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, UNASSIGN 2023-07-22 18:11:24,220 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:24,220 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049484220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049484220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049484220"}]},"ts":"1690049484220"} 2023-07-22 18:11:24,221 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:24,221 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:24,221 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049484220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049484220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049484220"}]},"ts":"1690049484220"} 2023-07-22 18:11:24,221 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:24,221 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:24,221 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049484221"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049484221"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049484221"}]},"ts":"1690049484221"} 2023-07-22 18:11:24,221 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049484220"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049484220"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049484220"}]},"ts":"1690049484220"} 2023-07-22 18:11:24,221 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049484221"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049484221"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049484221"}]},"ts":"1690049484221"} 2023-07-22 18:11:24,223 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=46, state=RUNNABLE; CloseRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:24,225 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=49, state=RUNNABLE; CloseRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:24,227 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=48, state=RUNNABLE; CloseRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:24,233 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=47, state=RUNNABLE; CloseRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:24,234 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=45, state=RUNNABLE; CloseRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:24,309 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-22 18:11:24,378 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:24,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 970de01dbb336fa7f28008075b40701f, disabling compactions & flushes 2023-07-22 18:11:24,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:24,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 
2023-07-22 18:11:24,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. after waiting 0 ms 2023-07-22 18:11:24,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:24,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:24,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7afb95b2b8f66881030302e3e19e632c, disabling compactions & flushes 2023-07-22 18:11:24,383 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:24,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:24,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. after waiting 0 ms 2023-07-22 18:11:24,383 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:24,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:24,391 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f. 2023-07-22 18:11:24,391 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 970de01dbb336fa7f28008075b40701f: 2023-07-22 18:11:24,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:24,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:24,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 0b9647966cbbf3a3683d7d737d062e73, disabling compactions & flushes 2023-07-22 18:11:24,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:24,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 
2023-07-22 18:11:24,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. after waiting 0 ms 2023-07-22 18:11:24,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:24,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:24,399 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=970de01dbb336fa7f28008075b40701f, regionState=CLOSED 2023-07-22 18:11:24,400 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049484399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049484399"}]},"ts":"1690049484399"} 2023-07-22 18:11:24,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c. 2023-07-22 18:11:24,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7afb95b2b8f66881030302e3e19e632c: 2023-07-22 18:11:24,406 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=47 2023-07-22 18:11:24,406 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=47, state=SUCCESS; CloseRegionProcedure 970de01dbb336fa7f28008075b40701f, server=jenkins-hbase4.apache.org,33411,1690049473844 in 170 msec 2023-07-22 18:11:24,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:24,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:24,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f697d57573425a043e6da37a27af9c2f, disabling compactions & flushes 2023-07-22 18:11:24,409 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:24,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:24,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. after waiting 0 ms 2023-07-22 18:11:24,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 
2023-07-22 18:11:24,409 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:24,410 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=970de01dbb336fa7f28008075b40701f, UNASSIGN in 195 msec 2023-07-22 18:11:24,410 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=7afb95b2b8f66881030302e3e19e632c, regionState=CLOSED 2023-07-22 18:11:24,410 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049484410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049484410"}]},"ts":"1690049484410"} 2023-07-22 18:11:24,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73. 2023-07-22 18:11:24,410 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 0b9647966cbbf3a3683d7d737d062e73: 2023-07-22 18:11:24,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:24,415 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:24,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing a46810d87c2fafa3237a5a28bde8a685, disabling compactions & flushes 2023-07-22 18:11:24,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:24,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:24,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. after waiting 0 ms 2023-07-22 18:11:24,416 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 
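The UNASSIGN and CloseRegionProcedure chain above is driven by the master's DisableTableProcedure (pid=44). For orientation, a minimal client-side sketch of the call that triggers it, assuming a reachable cluster configuration and using only the public Admin API (the class name and connection setup are illustrative, not taken from this test):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public final class DisableTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      // Blocks until the master's DisableTableProcedure completes; internally the
      // master unassigns and closes every region of the table, as logged above.
      admin.disableTable(tn);
      System.out.println("disabled=" + admin.isTableDisabled(tn));
    }
  }
}
```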
2023-07-22 18:11:24,416 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=0b9647966cbbf3a3683d7d737d062e73, regionState=CLOSED 2023-07-22 18:11:24,417 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049484416"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049484416"}]},"ts":"1690049484416"} 2023-07-22 18:11:24,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:24,427 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f. 2023-07-22 18:11:24,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f697d57573425a043e6da37a27af9c2f: 2023-07-22 18:11:24,427 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:24,428 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685. 2023-07-22 18:11:24,428 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for a46810d87c2fafa3237a5a28bde8a685: 2023-07-22 18:11:24,432 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=48 2023-07-22 18:11:24,432 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=48, state=SUCCESS; CloseRegionProcedure 7afb95b2b8f66881030302e3e19e632c, server=jenkins-hbase4.apache.org,38507,1690049474291 in 189 msec 2023-07-22 18:11:24,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:24,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:24,434 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=f697d57573425a043e6da37a27af9c2f, regionState=CLOSED 2023-07-22 18:11:24,434 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049484434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049484434"}]},"ts":"1690049484434"} 2023-07-22 18:11:24,434 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=a46810d87c2fafa3237a5a28bde8a685, regionState=CLOSED 2023-07-22 18:11:24,434 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049484434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049484434"}]},"ts":"1690049484434"} 2023-07-22 18:11:24,435 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=7afb95b2b8f66881030302e3e19e632c, UNASSIGN in 221 msec 2023-07-22 18:11:24,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=46 2023-07-22 18:11:24,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=46, state=SUCCESS; CloseRegionProcedure 0b9647966cbbf3a3683d7d737d062e73, server=jenkins-hbase4.apache.org,33411,1690049473844 in 209 msec 2023-07-22 18:11:24,443 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=0b9647966cbbf3a3683d7d737d062e73, UNASSIGN in 226 msec 2023-07-22 18:11:24,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=45 2023-07-22 18:11:24,444 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=49 2023-07-22 18:11:24,444 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; CloseRegionProcedure f697d57573425a043e6da37a27af9c2f, server=jenkins-hbase4.apache.org,38507,1690049474291 in 202 msec 2023-07-22 18:11:24,444 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=49, state=SUCCESS; CloseRegionProcedure a46810d87c2fafa3237a5a28bde8a685, server=jenkins-hbase4.apache.org,33411,1690049473844 in 212 msec 2023-07-22 18:11:24,446 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f697d57573425a043e6da37a27af9c2f, UNASSIGN in 233 msec 2023-07-22 18:11:24,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=44 2023-07-22 18:11:24,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=a46810d87c2fafa3237a5a28bde8a685, UNASSIGN in 233 msec 2023-07-22 18:11:24,448 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049484448"}]},"ts":"1690049484448"} 2023-07-22 18:11:24,451 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-22 18:11:24,453 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-22 18:11:24,456 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 263 msec 2023-07-22 18:11:24,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-22 18:11:24,511 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): 
Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-22 18:11:24,516 INFO [Listener at localhost/37829] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:24,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:24,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) 2023-07-22 18:11:24,538 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-22 18:11:24,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-22 18:11:24,554 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:24,554 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:24,554 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:24,554 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:24,554 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:24,559 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits] 2023-07-22 18:11:24,560 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/recovered.edits] 2023-07-22 18:11:24,561 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/f, FileablePath, 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/recovered.edits] 2023-07-22 18:11:24,561 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/recovered.edits] 2023-07-22 18:11:24,562 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/recovered.edits] 2023-07-22 18:11:24,588 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c/recovered.edits/7.seqid 2023-07-22 18:11:24,590 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/7afb95b2b8f66881030302e3e19e632c 2023-07-22 18:11:24,590 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73/recovered.edits/7.seqid 2023-07-22 18:11:24,590 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685/recovered.edits/7.seqid 2023-07-22 18:11:24,590 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f/recovered.edits/7.seqid 2023-07-22 18:11:24,592 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/0b9647966cbbf3a3683d7d737d062e73 2023-07-22 18:11:24,592 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f/recovered.edits/7.seqid 2023-07-22 18:11:24,592 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/970de01dbb336fa7f28008075b40701f 2023-07-22 18:11:24,592 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/a46810d87c2fafa3237a5a28bde8a685 2023-07-22 18:11:24,594 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f697d57573425a043e6da37a27af9c2f 2023-07-22 18:11:24,594 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 18:11:24,628 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-22 18:11:24,633 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-22 18:11:24,634 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
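The archival of the old region directories above is the first half of the TruncateTableProcedure stored earlier as pid=55 with preserveSplits=true. On the client side this whole flow corresponds to a single Admin call; a short sketch, assuming an Admin handle obtained as in the previous example:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public final class TruncatePreservingSplits {
  // 'admin' is assumed to come from Connection#getAdmin() on the same cluster.
  static void truncate(Admin admin) throws IOException {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    // preserveSplits=true matches the procedure parameters in the log: the old region
    // data is archived, then empty regions are recreated with the same split boundaries.
    admin.truncateTable(tn, true);
  }
}
```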
2023-07-22 18:11:24,634 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049484634"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:24,635 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049484634"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:24,635 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049484634"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:24,635 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049484634"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:24,635 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049484634"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:24,641 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-22 18:11:24,642 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-22 18:11:24,642 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f697d57573425a043e6da37a27af9c2f, NAME => 'Group_testTableMoveTruncateAndDrop,,1690049481816.f697d57573425a043e6da37a27af9c2f.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 0b9647966cbbf3a3683d7d737d062e73, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049481816.0b9647966cbbf3a3683d7d737d062e73.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 970de01dbb336fa7f28008075b40701f, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049481816.970de01dbb336fa7f28008075b40701f.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 7afb95b2b8f66881030302e3e19e632c, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049481816.7afb95b2b8f66881030302e3e19e632c.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => a46810d87c2fafa3237a5a28bde8a685, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049481816.a46810d87c2fafa3237a5a28bde8a685.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-22 18:11:24,643 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-22 18:11:24,643 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049484643"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:24,647 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-22 18:11:24,656 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:24,656 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:24,656 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:24,656 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:24,656 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:24,657 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 empty. 2023-07-22 18:11:24,657 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab empty. 2023-07-22 18:11:24,657 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb empty. 2023-07-22 18:11:24,657 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf empty. 2023-07-22 18:11:24,657 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 empty. 
2023-07-22 18:11:24,658 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:24,658 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:24,658 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:24,658 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:24,658 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:24,658 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 18:11:24,700 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:24,703 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 720ff31aa7524390203a8b59788dc7f2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:24,704 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => def3a82b4bc6348eede9adeb882a0cdf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:24,704 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => f2918ca009c99dd74adfcc7f13d88016, NAME => 
'Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:24,796 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:24,796 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 720ff31aa7524390203a8b59788dc7f2, disabling compactions & flushes 2023-07-22 18:11:24,796 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:24,796 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:24,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. after waiting 0 ms 2023-07-22 18:11:24,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:24,797 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 
2023-07-22 18:11:24,797 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 720ff31aa7524390203a8b59788dc7f2: 2023-07-22 18:11:24,797 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 99fdb0e9d5c0e9aa42850a932bb0c3ab, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:24,820 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:24,820 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 99fdb0e9d5c0e9aa42850a932bb0c3ab, disabling compactions & flushes 2023-07-22 18:11:24,820 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:24,820 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:24,820 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. after waiting 0 ms 2023-07-22 18:11:24,820 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:24,820 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 
2023-07-22 18:11:24,820 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 99fdb0e9d5c0e9aa42850a932bb0c3ab: 2023-07-22 18:11:24,821 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 9924dcdf3bdbbbbb9620ee7a07d5b1bb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:24,841 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:24,841 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 9924dcdf3bdbbbbb9620ee7a07d5b1bb, disabling compactions & flushes 2023-07-22 18:11:24,841 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:24,841 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:24,841 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. after waiting 0 ms 2023-07-22 18:11:24,841 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:24,841 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 
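Each recreated region is initialized from the table descriptor echoed above (a single family 'f', VERSIONS=1, no bloom filter, default block size), and the old split points are reused, which is what preserveSplits guarantees; the remaining two regions are initialized the same way just below. As a rough equivalent, a table with the same layout could be created from scratch with explicit split keys; a hedged sketch, where the escaped split-key strings are copied from the log and decoded with Bytes.toBytesBinary:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class CreatePresplitTableSketch {
  static void create(Admin admin) throws IOException {
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testTableMoveTruncateAndDrop"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)                  // VERSIONS => '1'
            .setBloomFilterType(BloomType.NONE) // BLOOMFILTER => 'NONE'
            .build())
        .build();
    // Four split keys produce the five regions seen in the log.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("aaaaa"),
        Bytes.toBytesBinary("i\\xBF\\x14i\\xBE"),
        Bytes.toBytesBinary("r\\x1C\\xC7r\\x1B"),
        Bytes.toBytes("zzzzz"),
    };
    admin.createTable(desc, splits);
  }
}
```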
2023-07-22 18:11:24,841 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 9924dcdf3bdbbbbb9620ee7a07d5b1bb: 2023-07-22 18:11:24,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-22 18:11:25,144 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing def3a82b4bc6348eede9adeb882a0cdf, disabling compactions & flushes 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing f2918ca009c99dd74adfcc7f13d88016, disabling compactions & flushes 2023-07-22 18:11:25,185 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,185 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. after waiting 0 ms 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,185 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 
after waiting 0 ms 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for def3a82b4bc6348eede9adeb882a0cdf: 2023-07-22 18:11:25,185 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,185 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for f2918ca009c99dd74adfcc7f13d88016: 2023-07-22 18:11:25,190 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485189"}]},"ts":"1690049485189"} 2023-07-22 18:11:25,190 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485189"}]},"ts":"1690049485189"} 2023-07-22 18:11:25,190 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485189"}]},"ts":"1690049485189"} 2023-07-22 18:11:25,190 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485189"}]},"ts":"1690049485189"} 2023-07-22 18:11:25,190 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485189"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485189"}]},"ts":"1690049485189"} 2023-07-22 18:11:25,193 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-22 18:11:25,195 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049485195"}]},"ts":"1690049485195"} 2023-07-22 18:11:25,197 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-22 18:11:25,202 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:25,202 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:25,202 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:25,202 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:25,202 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, ASSIGN}] 2023-07-22 18:11:25,205 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, ASSIGN 2023-07-22 18:11:25,205 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, ASSIGN 2023-07-22 18:11:25,206 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, ASSIGN 2023-07-22 18:11:25,206 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, ASSIGN 2023-07-22 18:11:25,206 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, ASSIGN 2023-07-22 18:11:25,208 INFO [PEWorker-3] 
assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:25,208 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:25,208 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:25,208 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:25,214 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:25,358 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
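With the balancer having picked targets for all five regions, the assignment procedures below write OPENING states and server locations into hbase:meta. A client or test can read the resulting placement back through the public RegionLocator API; a minimal sketch, assuming the same Connection as in the earlier example:

```java
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;

public final class PrintRegionPlacement {
  static void print(Connection conn) throws IOException {
    TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(tn)) {
      // Looks up every region row in hbase:meta and reports where it is hosted,
      // mirroring the regionLocation values written by the RegionStateStore above.
      List<HRegionLocation> locations = locator.getAllRegionLocations();
      for (HRegionLocation loc : locations) {
        System.out.println(loc.getRegion().getRegionNameAsString() + " -> " + loc.getServerName());
      }
    }
  }
}
```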
2023-07-22 18:11:25,362 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=9924dcdf3bdbbbbb9620ee7a07d5b1bb, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,362 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=f2918ca009c99dd74adfcc7f13d88016, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,362 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=def3a82b4bc6348eede9adeb882a0cdf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:25,362 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485362"}]},"ts":"1690049485362"} 2023-07-22 18:11:25,362 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485362"}]},"ts":"1690049485362"} 2023-07-22 18:11:25,362 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=720ff31aa7524390203a8b59788dc7f2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,362 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=99fdb0e9d5c0e9aa42850a932bb0c3ab, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:25,363 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485362"}]},"ts":"1690049485362"} 2023-07-22 18:11:25,363 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485362"}]},"ts":"1690049485362"} 2023-07-22 18:11:25,362 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485362"}]},"ts":"1690049485362"} 2023-07-22 18:11:25,365 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
9924dcdf3bdbbbbb9620ee7a07d5b1bb, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:25,366 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=57, state=RUNNABLE; OpenRegionProcedure def3a82b4bc6348eede9adeb882a0cdf, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:25,368 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=58, state=RUNNABLE; OpenRegionProcedure 720ff31aa7524390203a8b59788dc7f2, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:25,370 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=59, state=RUNNABLE; OpenRegionProcedure 99fdb0e9d5c0e9aa42850a932bb0c3ab, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:25,373 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=56, state=RUNNABLE; OpenRegionProcedure f2918ca009c99dd74adfcc7f13d88016, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:25,524 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 720ff31aa7524390203a8b59788dc7f2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 18:11:25,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,525 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,532 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 
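The RegionStateStore Put entries above write the info:regioninfo, info:sn and info:state columns of each region's row in hbase:meta as the regions move to OPENING, and the OpenRegionProcedure children (pid=61 through pid=65) then open them on the two target region servers. A hypothetical way to read those columns back from a client is sketched below; the family and qualifier names come from the Put entries above, while everything else is assumed and not part of the test.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public final class DumpMetaRegionStates {
  static void dump(Connection conn) throws Exception {
    byte[] info = Bytes.toBytes("info");
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(
             new Scan().setRowPrefixFilter(Bytes.toBytes("Group_testTableMoveTruncateAndDrop,")))) {
      for (Result row : scanner) {
        // info:state holds OPENING/OPEN/CLOSING/CLOSED; info:sn names the server the region is moving to.
        System.out.println(Bytes.toStringBinary(row.getRow()) + " state="
            + Bytes.toString(row.getValue(info, Bytes.toBytes("state"))));
      }
    }
  }
}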
2023-07-22 18:11:25,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 99fdb0e9d5c0e9aa42850a932bb0c3ab, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 18:11:25,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,533 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,538 INFO [StoreOpener-720ff31aa7524390203a8b59788dc7f2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,539 INFO [StoreOpener-99fdb0e9d5c0e9aa42850a932bb0c3ab-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,540 DEBUG [StoreOpener-720ff31aa7524390203a8b59788dc7f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/f 2023-07-22 18:11:25,541 DEBUG [StoreOpener-720ff31aa7524390203a8b59788dc7f2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/f 2023-07-22 18:11:25,541 DEBUG [StoreOpener-99fdb0e9d5c0e9aa42850a932bb0c3ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/f 2023-07-22 18:11:25,541 DEBUG [StoreOpener-99fdb0e9d5c0e9aa42850a932bb0c3ab-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/f 2023-07-22 18:11:25,541 INFO [StoreOpener-720ff31aa7524390203a8b59788dc7f2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 720ff31aa7524390203a8b59788dc7f2 columnFamilyName f 2023-07-22 18:11:25,541 INFO [StoreOpener-99fdb0e9d5c0e9aa42850a932bb0c3ab-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 99fdb0e9d5c0e9aa42850a932bb0c3ab columnFamilyName f 2023-07-22 18:11:25,542 INFO [StoreOpener-720ff31aa7524390203a8b59788dc7f2-1] regionserver.HStore(310): Store=720ff31aa7524390203a8b59788dc7f2/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:25,543 INFO [StoreOpener-99fdb0e9d5c0e9aa42850a932bb0c3ab-1] regionserver.HStore(310): Store=99fdb0e9d5c0e9aa42850a932bb0c3ab/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:25,543 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,544 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,545 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,548 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,552 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,554 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:25,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 99fdb0e9d5c0e9aa42850a932bb0c3ab; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11326367520, jitterRate=0.054850175976753235}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:25,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 99fdb0e9d5c0e9aa42850a932bb0c3ab: 2023-07-22 18:11:25,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:25,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 720ff31aa7524390203a8b59788dc7f2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9633982880, jitterRate=-0.10276542603969574}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:25,559 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 720ff31aa7524390203a8b59788dc7f2: 2023-07-22 18:11:25,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab., pid=64, masterSystemTime=1690049485523 2023-07-22 18:11:25,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2., pid=63, masterSystemTime=1690049485520 2023-07-22 18:11:25,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:25,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:25,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 
2023-07-22 18:11:25,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => def3a82b4bc6348eede9adeb882a0cdf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 18:11:25,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,563 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=99fdb0e9d5c0e9aa42850a932bb0c3ab, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:25,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,563 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485563"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049485563"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049485563"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049485563"}]},"ts":"1690049485563"} 2023-07-22 18:11:25,563 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,563 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 
2023-07-22 18:11:25,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => f2918ca009c99dd74adfcc7f13d88016, NAME => 'Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 18:11:25,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,564 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,565 INFO [StoreOpener-def3a82b4bc6348eede9adeb882a0cdf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,566 INFO [StoreOpener-f2918ca009c99dd74adfcc7f13d88016-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,567 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=720ff31aa7524390203a8b59788dc7f2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,567 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485566"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049485566"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049485566"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049485566"}]},"ts":"1690049485566"} 2023-07-22 18:11:25,568 DEBUG [StoreOpener-def3a82b4bc6348eede9adeb882a0cdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/f 2023-07-22 18:11:25,569 DEBUG [StoreOpener-def3a82b4bc6348eede9adeb882a0cdf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/f 2023-07-22 18:11:25,569 INFO [StoreOpener-def3a82b4bc6348eede9adeb882a0cdf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region def3a82b4bc6348eede9adeb882a0cdf columnFamilyName f 2023-07-22 18:11:25,569 DEBUG [StoreOpener-f2918ca009c99dd74adfcc7f13d88016-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/f 2023-07-22 18:11:25,569 DEBUG [StoreOpener-f2918ca009c99dd74adfcc7f13d88016-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/f 2023-07-22 18:11:25,570 INFO [StoreOpener-f2918ca009c99dd74adfcc7f13d88016-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region f2918ca009c99dd74adfcc7f13d88016 columnFamilyName f 2023-07-22 18:11:25,570 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=59 2023-07-22 18:11:25,570 INFO [StoreOpener-def3a82b4bc6348eede9adeb882a0cdf-1] regionserver.HStore(310): Store=def3a82b4bc6348eede9adeb882a0cdf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:25,572 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,573 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,573 INFO [StoreOpener-f2918ca009c99dd74adfcc7f13d88016-1] regionserver.HStore(310): Store=f2918ca009c99dd74adfcc7f13d88016/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:25,575 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, ASSIGN in 368 msec 2023-07-22 18:11:25,575 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,570 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=59, state=SUCCESS; OpenRegionProcedure 99fdb0e9d5c0e9aa42850a932bb0c3ab, server=jenkins-hbase4.apache.org,33411,1690049473844 in 195 msec 2023-07-22 18:11:25,575 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,578 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-22 18:11:25,579 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; OpenRegionProcedure 720ff31aa7524390203a8b59788dc7f2, server=jenkins-hbase4.apache.org,38507,1690049474291 in 204 msec 2023-07-22 18:11:25,581 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,581 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, ASSIGN in 377 msec 2023-07-22 18:11:25,595 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:25,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:25,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened f2918ca009c99dd74adfcc7f13d88016; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11836443040, jitterRate=0.10235466063022614}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:25,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for f2918ca009c99dd74adfcc7f13d88016: 2023-07-22 18:11:25,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened def3a82b4bc6348eede9adeb882a0cdf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10867680480, jitterRate=0.012131616473197937}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:25,598 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for def3a82b4bc6348eede9adeb882a0cdf: 
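The "Opened …; SteppingSplitPolicy…ConstantSizeRegionSplitPolicy{desiredMaxFileSize=…, jitterRate=…}" entries above and below come from the split policy applying a per-region random jitter to the configured maximum store file size. Assuming the HBase default hbase.hregion.max.filesize of 10737418240 bytes (10 GiB) on this mini-cluster, the logged numbers can be reproduced approximately as follows; exact values differ by a few bytes because HBase performs the jitter arithmetic in float precision.

public final class SplitSizeJitter {
  public static void main(String[] args) {
    long maxFileSize = 10_737_418_240L;       // assumed default hbase.hregion.max.filesize (10 GiB)
    double jitterRate = 0.012131616473197937; // jitterRate logged for def3a82b4bc6348eede9adeb882a0cdf
    long desired = maxFileSize + (long) (maxFileSize * jitterRate);
    System.out.println(desired);              // within a few bytes of the logged desiredMaxFileSize=10867680480
  }
}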
2023-07-22 18:11:25,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016., pid=65, masterSystemTime=1690049485520 2023-07-22 18:11:25,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf., pid=62, masterSystemTime=1690049485523 2023-07-22 18:11:25,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,600 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,600 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:25,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9924dcdf3bdbbbbb9620ee7a07d5b1bb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 18:11:25,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:25,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,601 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,602 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=f2918ca009c99dd74adfcc7f13d88016, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,602 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485602"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049485602"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049485602"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049485602"}]},"ts":"1690049485602"} 2023-07-22 18:11:25,604 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 
2023-07-22 18:11:25,604 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,605 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=def3a82b4bc6348eede9adeb882a0cdf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:25,605 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485605"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049485605"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049485605"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049485605"}]},"ts":"1690049485605"} 2023-07-22 18:11:25,608 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=56 2023-07-22 18:11:25,608 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=56, state=SUCCESS; OpenRegionProcedure f2918ca009c99dd74adfcc7f13d88016, server=jenkins-hbase4.apache.org,38507,1690049474291 in 234 msec 2023-07-22 18:11:25,611 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, ASSIGN in 406 msec 2023-07-22 18:11:25,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=57 2023-07-22 18:11:25,612 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=57, state=SUCCESS; OpenRegionProcedure def3a82b4bc6348eede9adeb882a0cdf, server=jenkins-hbase4.apache.org,33411,1690049473844 in 242 msec 2023-07-22 18:11:25,614 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, ASSIGN in 410 msec 2023-07-22 18:11:25,615 INFO [StoreOpener-9924dcdf3bdbbbbb9620ee7a07d5b1bb-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,617 DEBUG [StoreOpener-9924dcdf3bdbbbbb9620ee7a07d5b1bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/f 2023-07-22 18:11:25,617 DEBUG [StoreOpener-9924dcdf3bdbbbbb9620ee7a07d5b1bb-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/f 2023-07-22 18:11:25,617 INFO [StoreOpener-9924dcdf3bdbbbbb9620ee7a07d5b1bb-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9924dcdf3bdbbbbb9620ee7a07d5b1bb columnFamilyName f 2023-07-22 18:11:25,618 INFO [StoreOpener-9924dcdf3bdbbbbb9620ee7a07d5b1bb-1] regionserver.HStore(310): Store=9924dcdf3bdbbbbb9620ee7a07d5b1bb/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:25,619 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,620 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,624 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,626 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:25,627 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9924dcdf3bdbbbbb9620ee7a07d5b1bb; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9774843040, jitterRate=-0.08964680135250092}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:25,627 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9924dcdf3bdbbbbb9620ee7a07d5b1bb: 2023-07-22 18:11:25,628 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb., pid=61, masterSystemTime=1690049485520 2023-07-22 18:11:25,629 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:25,630 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 
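At this point all five regions of the truncated table have been opened by their region servers, three on jenkins-hbase4.apache.org,38507,1690049474291 and two on jenkins-hbase4.apache.org,33411,1690049473844; the corresponding hbase:meta OPEN updates appear above and immediately below. A hypothetical verification snippet (not taken from the test) that lists those boundaries and locations from the client side:

import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public final class ListRegionLocations {
  static void dump(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // e.g. f2918ca009c99dd74adfcc7f13d88016 [, aaaaa) -> jenkins-hbase4.apache.org,38507,1690049474291
        System.out.printf("%s [%s, %s) -> %s%n",
            loc.getRegion().getEncodedName(),
            Bytes.toStringBinary(loc.getRegion().getStartKey()),
            Bytes.toStringBinary(loc.getRegion().getEndKey()),
            loc.getServerName());
      }
    }
  }
}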
2023-07-22 18:11:25,630 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=9924dcdf3bdbbbbb9620ee7a07d5b1bb, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,631 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485630"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049485630"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049485630"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049485630"}]},"ts":"1690049485630"} 2023-07-22 18:11:25,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-22 18:11:25,637 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure 9924dcdf3bdbbbbb9620ee7a07d5b1bb, server=jenkins-hbase4.apache.org,38507,1690049474291 in 267 msec 2023-07-22 18:11:25,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=60, resume processing ppid=55 2023-07-22 18:11:25,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, ASSIGN in 435 msec 2023-07-22 18:11:25,639 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049485639"}]},"ts":"1690049485639"} 2023-07-22 18:11:25,641 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-22 18:11:25,643 DEBUG [PEWorker-3] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-22 18:11:25,644 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 1.1180 sec 2023-07-22 18:11:25,645 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-22 18:11:25,646 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-22 18:11:25,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:25,647 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:25,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:25,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:25,649 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,651 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=66, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-22 18:11:25,655 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049485655"}]},"ts":"1690049485655"} 2023-07-22 18:11:25,657 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-22 18:11:25,658 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-22 18:11:25,659 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, UNASSIGN}] 2023-07-22 18:11:25,661 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, UNASSIGN 2023-07-22 18:11:25,661 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, UNASSIGN 2023-07-22 18:11:25,662 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, UNASSIGN 2023-07-22 18:11:25,662 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, UNASSIGN 
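The two GetRSGroupInfo requests above look up the group Group_testTableMoveTruncateAndDrop_1384651503 through the master's RSGroupAdminEndpoint coprocessor. A rough client-side equivalent using the hbase-rsgroup client API is sketched below; it is an assumption about how such a lookup is issued, not a copy of the test code.

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class ShowRSGroup {
  static void show(Connection conn) throws Exception {
    // RSGroupAdminClient wraps the RSGroupAdminService RPCs logged above (assumed branch-2 API).
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("Group_testTableMoveTruncateAndDrop_1384651503");
    System.out.println(info.getName() + " servers=" + info.getServers() + " tables=" + info.getTables());
  }
}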
2023-07-22 18:11:25,663 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, UNASSIGN 2023-07-22 18:11:25,663 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=720ff31aa7524390203a8b59788dc7f2, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,663 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485663"}]},"ts":"1690049485663"} 2023-07-22 18:11:25,663 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=def3a82b4bc6348eede9adeb882a0cdf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:25,664 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485663"}]},"ts":"1690049485663"} 2023-07-22 18:11:25,664 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=f2918ca009c99dd74adfcc7f13d88016, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,664 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485663"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485663"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485663"}]},"ts":"1690049485663"} 2023-07-22 18:11:25,665 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=99fdb0e9d5c0e9aa42850a932bb0c3ab, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:25,665 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=9924dcdf3bdbbbbb9620ee7a07d5b1bb, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:25,665 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485665"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485665"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485665"}]},"ts":"1690049485665"} 2023-07-22 18:11:25,665 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485665"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049485665"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049485665"}]},"ts":"1690049485665"} 2023-07-22 18:11:25,667 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=69, state=RUNNABLE; CloseRegionProcedure 720ff31aa7524390203a8b59788dc7f2, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:25,668 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=68, state=RUNNABLE; CloseRegionProcedure def3a82b4bc6348eede9adeb882a0cdf, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:25,670 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=67, state=RUNNABLE; CloseRegionProcedure f2918ca009c99dd74adfcc7f13d88016, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:25,671 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 99fdb0e9d5c0e9aa42850a932bb0c3ab, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:25,673 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 9924dcdf3bdbbbbb9620ee7a07d5b1bb, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:25,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-22 18:11:25,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,825 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,827 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 720ff31aa7524390203a8b59788dc7f2, disabling compactions & flushes 2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing def3a82b4bc6348eede9adeb882a0cdf, disabling compactions & flushes 2023-07-22 18:11:25,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,828 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 
2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. after waiting 0 ms 2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. after waiting 0 ms 2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,828 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,834 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:25,835 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:25,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2. 2023-07-22 18:11:25,836 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 720ff31aa7524390203a8b59788dc7f2: 2023-07-22 18:11:25,837 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf. 2023-07-22 18:11:25,837 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for def3a82b4bc6348eede9adeb882a0cdf: 2023-07-22 18:11:25,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing f2918ca009c99dd74adfcc7f13d88016, disabling compactions & flushes 2023-07-22 18:11:25,840 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 
after waiting 0 ms 2023-07-22 18:11:25,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,844 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=720ff31aa7524390203a8b59788dc7f2, regionState=CLOSED 2023-07-22 18:11:25,845 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485844"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485844"}]},"ts":"1690049485844"} 2023-07-22 18:11:25,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,845 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,846 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=def3a82b4bc6348eede9adeb882a0cdf, regionState=CLOSED 2023-07-22 18:11:25,847 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485846"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485846"}]},"ts":"1690049485846"} 2023-07-22 18:11:25,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=69 2023-07-22 18:11:25,851 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=69, state=SUCCESS; CloseRegionProcedure 720ff31aa7524390203a8b59788dc7f2, server=jenkins-hbase4.apache.org,38507,1690049474291 in 180 msec 2023-07-22 18:11:25,853 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=68 2023-07-22 18:11:25,853 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=720ff31aa7524390203a8b59788dc7f2, UNASSIGN in 192 msec 2023-07-22 18:11:25,853 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=68, state=SUCCESS; CloseRegionProcedure def3a82b4bc6348eede9adeb882a0cdf, server=jenkins-hbase4.apache.org,33411,1690049473844 in 181 msec 2023-07-22 18:11:25,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 99fdb0e9d5c0e9aa42850a932bb0c3ab, disabling compactions & flushes 2023-07-22 18:11:25,854 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:25,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:25,854 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 
after waiting 0 ms 2023-07-22 18:11:25,855 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:25,859 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=def3a82b4bc6348eede9adeb882a0cdf, UNASSIGN in 194 msec 2023-07-22 18:11:25,860 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:25,861 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016. 2023-07-22 18:11:25,861 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for f2918ca009c99dd74adfcc7f13d88016: 2023-07-22 18:11:25,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,864 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9924dcdf3bdbbbbb9620ee7a07d5b1bb, disabling compactions & flushes 2023-07-22 18:11:25,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:25,866 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:25,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 2023-07-22 18:11:25,866 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=f2918ca009c99dd74adfcc7f13d88016, regionState=CLOSED 2023-07-22 18:11:25,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. after waiting 0 ms 2023-07-22 18:11:25,866 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 
2023-07-22 18:11:25,867 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485866"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485866"}]},"ts":"1690049485866"} 2023-07-22 18:11:25,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab. 2023-07-22 18:11:25,867 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 99fdb0e9d5c0e9aa42850a932bb0c3ab: 2023-07-22 18:11:25,869 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,870 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=99fdb0e9d5c0e9aa42850a932bb0c3ab, regionState=CLOSED 2023-07-22 18:11:25,870 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1690049485870"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485870"}]},"ts":"1690049485870"} 2023-07-22 18:11:25,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=67 2023-07-22 18:11:25,872 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=67, state=SUCCESS; CloseRegionProcedure f2918ca009c99dd74adfcc7f13d88016, server=jenkins-hbase4.apache.org,38507,1690049474291 in 199 msec 2023-07-22 18:11:25,875 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=f2918ca009c99dd74adfcc7f13d88016, UNASSIGN in 213 msec 2023-07-22 18:11:25,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-22 18:11:25,875 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 99fdb0e9d5c0e9aa42850a932bb0c3ab, server=jenkins-hbase4.apache.org,33411,1690049473844 in 202 msec 2023-07-22 18:11:25,877 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=99fdb0e9d5c0e9aa42850a932bb0c3ab, UNASSIGN in 216 msec 2023-07-22 18:11:25,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:25,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb. 
2023-07-22 18:11:25,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9924dcdf3bdbbbbb9620ee7a07d5b1bb: 2023-07-22 18:11:25,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,883 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=9924dcdf3bdbbbbb9620ee7a07d5b1bb, regionState=CLOSED 2023-07-22 18:11:25,883 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1690049485883"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049485883"}]},"ts":"1690049485883"} 2023-07-22 18:11:25,886 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-22 18:11:25,886 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 9924dcdf3bdbbbbb9620ee7a07d5b1bb, server=jenkins-hbase4.apache.org,38507,1690049474291 in 212 msec 2023-07-22 18:11:25,888 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-22 18:11:25,889 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9924dcdf3bdbbbbb9620ee7a07d5b1bb, UNASSIGN in 227 msec 2023-07-22 18:11:25,890 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049485890"}]},"ts":"1690049485890"} 2023-07-22 18:11:25,893 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-22 18:11:25,900 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-22 18:11:25,904 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 253 msec 2023-07-22 18:11:25,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-22 18:11:25,958 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-22 18:11:25,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,972 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,975 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_1384651503' 2023-07-22 18:11:25,977 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:25,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:25,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:25,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:25,980 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:25,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-22 18:11:25,990 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:25,990 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:25,990 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:25,990 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:25,990 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:25,993 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/recovered.edits] 2023-07-22 18:11:25,993 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/recovered.edits] 2023-07-22 18:11:25,994 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/recovered.edits] 2023-07-22 18:11:25,994 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/recovered.edits] 2023-07-22 18:11:25,994 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/recovered.edits] 2023-07-22 18:11:26,005 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2/recovered.edits/4.seqid 2023-07-22 18:11:26,006 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab/recovered.edits/4.seqid 2023-07-22 18:11:26,006 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016/recovered.edits/4.seqid 2023-07-22 18:11:26,006 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf/recovered.edits/4.seqid 2023-07-22 18:11:26,006 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb/recovered.edits/4.seqid 2023-07-22 18:11:26,007 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/720ff31aa7524390203a8b59788dc7f2 2023-07-22 18:11:26,007 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/99fdb0e9d5c0e9aa42850a932bb0c3ab 2023-07-22 18:11:26,007 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/f2918ca009c99dd74adfcc7f13d88016 2023-07-22 18:11:26,007 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9924dcdf3bdbbbbb9620ee7a07d5b1bb 2023-07-22 18:11:26,007 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testTableMoveTruncateAndDrop/def3a82b4bc6348eede9adeb882a0cdf 2023-07-22 18:11:26,008 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-22 18:11:26,010 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:26,017 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-22 18:11:26,020 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-22 18:11:26,022 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:26,022 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-22 18:11:26,022 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049486022"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:26,022 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049486022"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:26,022 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049486022"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:26,023 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049486022"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:26,023 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049486022"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:26,025 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-22 18:11:26,025 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => f2918ca009c99dd74adfcc7f13d88016, NAME => 'Group_testTableMoveTruncateAndDrop,,1690049484596.f2918ca009c99dd74adfcc7f13d88016.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => def3a82b4bc6348eede9adeb882a0cdf, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1690049484597.def3a82b4bc6348eede9adeb882a0cdf.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 720ff31aa7524390203a8b59788dc7f2, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1690049484597.720ff31aa7524390203a8b59788dc7f2.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 99fdb0e9d5c0e9aa42850a932bb0c3ab, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1690049484597.99fdb0e9d5c0e9aa42850a932bb0c3ab.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9924dcdf3bdbbbbb9620ee7a07d5b1bb, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1690049484597.9924dcdf3bdbbbbb9620ee7a07d5b1bb.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-22 18:11:26,025 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-22 18:11:26,025 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049486025"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:26,027 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-22 18:11:26,029 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-22 18:11:26,031 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 63 msec 2023-07-22 18:11:26,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-22 18:11:26,091 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-22 18:11:26,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:26,092 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:26,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:26,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:26,099 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:26,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup default 2023-07-22 18:11:26,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:26,106 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:26,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1384651503, current retry=0 2023-07-22 18:11:26,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:26,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1384651503 => default 2023-07-22 18:11:26,124 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:26,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1384651503 2023-07-22 18:11:26,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:26,141 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:26,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:26,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:26,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:26,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:26,144 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:26,145 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:26,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,152 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:26,153 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:26,157 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:26,159 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:26,161 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:26,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:26,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,175 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:26,178 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050686178, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:26,179 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:26,182 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:26,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,183 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,184 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:26,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:26,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:26,221 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=504 (was 424) Potentially hanging thread: PacketResponder: BP-773543169-172.31.14.131-1690049468130:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-640-acceptor-0@3c81b7eb-ServerConnector@6057e31f{HTTP/1.1, (http/1.1)}{0.0.0.0:36289} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) 
org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-773543169-172.31.14.131-1690049468130:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:60870 [Receiving block BP-773543169-172.31.14.131-1690049468130:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf-prefix:jenkins-hbase4.apache.org,45471,1690049478954 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:60894 [Receiving block BP-773543169-172.31.14.131-1690049468130:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:45471Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:45471 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-773543169-172.31.14.131-1690049468130:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: hconnection-0x744a8a1-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:33366 [Receiving block BP-773543169-172.31.14.131-1690049468130:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:48094 [Receiving block BP-773543169-172.31.14.131-1690049468130:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:48122 [Receiving block BP-773543169-172.31.14.131-1690049468130:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_740894810_17 at /127.0.0.1:38240 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:45471-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-639 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:43335 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-773543169-172.31.14.131-1690049468130:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-646 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-773543169-172.31.14.131-1690049468130:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62144@0x3baf0687 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1400815784-645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62144@0x3baf0687-SendThread(127.0.0.1:62144) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-773543169-172.31.14.131-1690049468130:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-70f9750-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1285588141_17 at /127.0.0.1:59914 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1400815784-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf-prefix:jenkins-hbase4.apache.org,45471,1690049478954.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:33334 [Receiving block BP-773543169-172.31.14.131-1690049468130:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:43335 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:62144@0x3baf0687-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1400815784-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45471 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_740894810_17 at /127.0.0.1:47058 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=809 (was 680) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=400 (was 389) - SystemLoadAverage LEAK? 
-, ProcessCount=174 (was 174), AvailableMemoryMB=6454 (was 6837) 2023-07-22 18:11:26,221 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-22 18:11:26,246 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=504, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=400, ProcessCount=174, AvailableMemoryMB=6452 2023-07-22 18:11:26,246 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=504 is superior to 500 2023-07-22 18:11:26,246 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testValidGroupNames 2023-07-22 18:11:26,253 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,254 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:26,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 18:11:26,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:26,256 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:26,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:26,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:26,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:26,265 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:26,269 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:26,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:26,275 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-22 18:11:26,278 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:26,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:26,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,284 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,293 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:26,293 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,294 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050686293, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:26,294 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:26,296 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:26,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,298 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:26,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:26,299 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:26,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-22 18:11:26,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,300 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:34802 deadline: 1690050686300, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-22 18:11:26,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-22 18:11:26,302 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:34802 deadline: 1690050686301, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-22 18:11:26,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-22 18:11:26,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:34802 deadline: 1690050686303, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-22 18:11:26,305 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-22 18:11:26,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-22 18:11:26,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,320 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:26,322 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:26,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,331 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:26,354 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:26,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:26,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:26,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:26,356 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-22 18:11:26,363 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,364 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:26,367 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:26,368 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:26,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:26,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:26,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:26,370 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:26,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:26,377 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:26,386 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:26,390 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:26,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:26,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,395 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:26,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:26,411 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,412 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:26,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050686415, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:26,416 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:26,418 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:26,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,420 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:26,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:26,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:26,440 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=507 (was 504) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=809 (was 809), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=400 (was 400), ProcessCount=174 (was 174), AvailableMemoryMB=6447 (was 6452) 2023-07-22 18:11:26,440 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-22 18:11:26,465 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=507, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=400, ProcessCount=174, AvailableMemoryMB=6446 2023-07-22 18:11:26,465 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-22 18:11:26,466 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-22 18:11:26,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,472 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:26,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:26,474 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:26,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:26,475 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:26,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:26,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,480 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:26,482 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:26,485 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:26,486 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:26,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,490 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,494 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:26,496 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:26,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,501 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:26,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:26,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050686504, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:26,505 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:26,506 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:26,507 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,508 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,508 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:26,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:26,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:26,511 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,512 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:26,513 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:26,514 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
2023-07-22 18:11:26,517 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 18:11:26,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,520 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:26,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:26,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:26,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:26,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:38507] to rsgroup bar 2023-07-22 18:11:26,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:26,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 18:11:26,540 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:26,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:26,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(238): Moving server region fe5e9f07ec9c7007b36085471b5cd477, which do not belong to RSGroup bar 2023-07-22 18:11:26,545 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, REOPEN/MOVE 2023-07-22 18:11:26,545 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 18:11:26,547 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, REOPEN/MOVE 2023-07-22 18:11:26,548 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fe5e9f07ec9c7007b36085471b5cd477, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:26,548 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049486548"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049486548"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049486548"}]},"ts":"1690049486548"} 2023-07-22 18:11:26,550 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure fe5e9f07ec9c7007b36085471b5cd477, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:26,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:26,706 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fe5e9f07ec9c7007b36085471b5cd477, disabling compactions & flushes 2023-07-22 18:11:26,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:26,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:26,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. after waiting 0 ms 2023-07-22 18:11:26,707 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:26,707 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing fe5e9f07ec9c7007b36085471b5cd477 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-22 18:11:26,733 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/.tmp/info/48a05e0e95aa4eeab639654ebb3a0927 2023-07-22 18:11:26,743 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/.tmp/info/48a05e0e95aa4eeab639654ebb3a0927 as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info/48a05e0e95aa4eeab639654ebb3a0927 2023-07-22 18:11:26,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info/48a05e0e95aa4eeab639654ebb3a0927, entries=2, sequenceid=6, filesize=4.8 K 2023-07-22 18:11:26,752 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for fe5e9f07ec9c7007b36085471b5cd477 in 45ms, sequenceid=6, compaction requested=false 2023-07-22 18:11:26,761 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-22 18:11:26,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:26,763 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fe5e9f07ec9c7007b36085471b5cd477: 2023-07-22 18:11:26,763 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding fe5e9f07ec9c7007b36085471b5cd477 move to jenkins-hbase4.apache.org,45471,1690049478954 record at close sequenceid=6 2023-07-22 18:11:26,765 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:26,767 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fe5e9f07ec9c7007b36085471b5cd477, regionState=CLOSED 2023-07-22 18:11:26,767 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049486767"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049486767"}]},"ts":"1690049486767"} 2023-07-22 18:11:26,772 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-22 18:11:26,772 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure fe5e9f07ec9c7007b36085471b5cd477, server=jenkins-hbase4.apache.org,38977,1690049474061 in 219 msec 2023-07-22 18:11:26,773 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:26,924 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fe5e9f07ec9c7007b36085471b5cd477, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:26,924 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049486924"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049486924"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049486924"}]},"ts":"1690049486924"} 2023-07-22 18:11:26,926 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure fe5e9f07ec9c7007b36085471b5cd477, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:27,086 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 
2023-07-22 18:11:27,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fe5e9f07ec9c7007b36085471b5cd477, NAME => 'hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:27,086 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:27,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,087 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,088 INFO [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,089 DEBUG [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info 2023-07-22 18:11:27,089 DEBUG [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info 2023-07-22 18:11:27,090 INFO [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fe5e9f07ec9c7007b36085471b5cd477 columnFamilyName info 2023-07-22 18:11:27,097 DEBUG [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] regionserver.HStore(539): loaded hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/info/48a05e0e95aa4eeab639654ebb3a0927 2023-07-22 18:11:27,097 INFO [StoreOpener-fe5e9f07ec9c7007b36085471b5cd477-1] regionserver.HStore(310): Store=fe5e9f07ec9c7007b36085471b5cd477/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:27,098 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,100 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,104 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:27,105 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fe5e9f07ec9c7007b36085471b5cd477; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10642914720, jitterRate=-0.008801326155662537}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:27,105 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fe5e9f07ec9c7007b36085471b5cd477: 2023-07-22 18:11:27,106 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477., pid=80, masterSystemTime=1690049487082 2023-07-22 18:11:27,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:27,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 
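
The reopen on jenkins-hbase4.apache.org,45471,1690049478954 completes the REOPEN/MOVE of the namespace region off the three servers being transferred into rsgroup "bar" (the "Move servers done: default => bar" entry appears just below). A minimal sketch of issuing that transfer through the branch-2 hbase-rsgroup client; only the host:port values are taken from this log, and the connection setup is an assumption:

import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      Set<Address> servers = new HashSet<>();
      // host:port pairs of the region servers being moved into group "bar"
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33411));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38977));
      servers.add(Address.fromParts("jenkins-hbase4.apache.org", 38507));
      // Regions still hosted on these servers are first moved back onto servers
      // that remain in the source group, then the servers change group membership.
      rsGroupAdmin.moveServers(servers, "bar");
    }
  }
}
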
2023-07-22 18:11:27,118 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=fe5e9f07ec9c7007b36085471b5cd477, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:27,118 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049487117"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049487117"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049487117"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049487117"}]},"ts":"1690049487117"} 2023-07-22 18:11:27,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-22 18:11:27,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure fe5e9f07ec9c7007b36085471b5cd477, server=jenkins-hbase4.apache.org,45471,1690049478954 in 194 msec 2023-07-22 18:11:27,124 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=fe5e9f07ec9c7007b36085471b5cd477, REOPEN/MOVE in 578 msec 2023-07-22 18:11:27,547 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-22 18:11:27,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291, jenkins-hbase4.apache.org,38977,1690049474061] are moved back to default 2023-07-22 18:11:27,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-22 18:11:27,547 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:27,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:27,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:27,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-22 18:11:27,562 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:27,565 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-22 18:11:27,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:27,569 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:27,570 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-22 18:11:27,571 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-22 18:11:27,574 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:27,574 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 18:11:27,575 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:27,576 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:27,578 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:27,580 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,581 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b empty. 
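
The create request above carries the full table descriptor for Group_testFailRemoveGroup: a single column family 'f' with one version, no bloom filter, one region replica, and otherwise default attributes. A minimal sketch of the corresponding client call using the standard 2.x descriptor builders; the connection setup is an assumption:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    // Mirrors the descriptor printed in the log: family 'f', VERSIONS => '1',
    // BLOOMFILTER => 'NONE', REGION_REPLICATION => '1', everything else default.
    TableDescriptor desc = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
        .setRegionReplication(1)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.NONE)
            .build())
        .build();
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createTable(desc); // the master runs this as the CreateTableProcedure logged here
    }
  }
}
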
2023-07-22 18:11:27,582 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,582 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-22 18:11:27,606 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:27,607 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1b8e2784650dc0301efc76b7e2fa617b, NAME => 'Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:27,620 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:27,620 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing 1b8e2784650dc0301efc76b7e2fa617b, disabling compactions & flushes 2023-07-22 18:11:27,620 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:27,620 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:27,620 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. after waiting 0 ms 2023-07-22 18:11:27,620 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:27,620 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
2023-07-22 18:11:27,620 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:27,624 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:27,625 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049487625"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049487625"}]},"ts":"1690049487625"} 2023-07-22 18:11:27,627 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:27,628 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:27,628 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049487628"}]},"ts":"1690049487628"} 2023-07-22 18:11:27,629 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-22 18:11:27,639 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, ASSIGN}] 2023-07-22 18:11:27,641 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, ASSIGN 2023-07-22 18:11:27,642 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:27,672 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-22 18:11:27,793 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:27,793 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049487793"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049487793"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049487793"}]},"ts":"1690049487793"} 2023-07-22 18:11:27,795 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 
18:11:27,874 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-22 18:11:27,951 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:27,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b8e2784650dc0301efc76b7e2fa617b, NAME => 'Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:27,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:27,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,954 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,956 DEBUG [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f 2023-07-22 18:11:27,956 DEBUG [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f 2023-07-22 18:11:27,956 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b8e2784650dc0301efc76b7e2fa617b columnFamilyName f 2023-07-22 18:11:27,960 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] regionserver.HStore(310): Store=1b8e2784650dc0301efc76b7e2fa617b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:27,961 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,962 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,966 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:27,969 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:27,970 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b8e2784650dc0301efc76b7e2fa617b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10310300160, jitterRate=-0.03977847099304199}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:27,970 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:27,971 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b., pid=83, masterSystemTime=1690049487947 2023-07-22 18:11:27,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:27,973 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
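
The region for the new table has just been opened on its assigned server; once the create-table procedure chain (pid=83/82/81) wraps up below, the client blocks until every region of Group_testFailRemoveGroup is reported assigned (the "Waiting until all regions ... Timeout = 60000ms" entries). A minimal sketch of that wait, assuming an already-started HBaseTestingUtility instance named util (the surrounding names are illustrative):

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForAssignmentSketch {
  static void waitForTable(HBaseTestingUtility util) throws Exception {
    TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
    // Blocks until hbase:meta and the AssignmentManager both report every region
    // of the table as open, or the 60s timeout elapses.
    util.waitUntilAllRegionsAssigned(tn, 60000);
  }
}
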
2023-07-22 18:11:27,974 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:27,974 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049487974"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049487974"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049487974"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049487974"}]},"ts":"1690049487974"} 2023-07-22 18:11:27,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-22 18:11:27,983 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954 in 180 msec 2023-07-22 18:11:27,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-22 18:11:27,985 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, ASSIGN in 344 msec 2023-07-22 18:11:27,986 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:27,986 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049487986"}]},"ts":"1690049487986"} 2023-07-22 18:11:27,999 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-22 18:11:28,002 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:28,004 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 437 msec 2023-07-22 18:11:28,087 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 18:11:28,175 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-22 18:11:28,176 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-22 18:11:28,176 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-22 18:11:28,176 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:28,189 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. 
Checking AM states. 2023-07-22 18:11:28,189 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:28,189 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-22 18:11:28,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-22 18:11:28,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:28,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 18:11:28,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:28,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:28,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-22 18:11:28,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 1b8e2784650dc0301efc76b7e2fa617b to RSGroup bar 2023-07-22 18:11:28,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:28,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:28,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:28,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:28,204 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-22 18:11:28,205 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:28,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE 2023-07-22 18:11:28,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-22 18:11:28,208 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE 2023-07-22 18:11:28,210 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:28,210 DEBUG [PEWorker-3] 
assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049488210"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049488210"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049488210"}]},"ts":"1690049488210"} 2023-07-22 18:11:28,213 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:28,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b8e2784650dc0301efc76b7e2fa617b, disabling compactions & flushes 2023-07-22 18:11:28,369 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:28,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:28,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. after waiting 0 ms 2023-07-22 18:11:28,369 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:28,377 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:28,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
2023-07-22 18:11:28,379 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:28,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1b8e2784650dc0301efc76b7e2fa617b move to jenkins-hbase4.apache.org,38507,1690049474291 record at close sequenceid=2 2023-07-22 18:11:28,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,382 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=CLOSED 2023-07-22 18:11:28,382 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049488382"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049488382"}]},"ts":"1690049488382"} 2023-07-22 18:11:28,386 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-22 18:11:28,386 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954 in 171 msec 2023-07-22 18:11:28,386 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:28,537 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:28,537 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:28,537 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049488537"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049488537"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049488537"}]},"ts":"1690049488537"} 2023-07-22 18:11:28,542 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:28,698 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
2023-07-22 18:11:28,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b8e2784650dc0301efc76b7e2fa617b, NAME => 'Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:28,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:28,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,698 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,700 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,702 DEBUG [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f 2023-07-22 18:11:28,702 DEBUG [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f 2023-07-22 18:11:28,702 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b8e2784650dc0301efc76b7e2fa617b columnFamilyName f 2023-07-22 18:11:28,703 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] regionserver.HStore(310): Store=1b8e2784650dc0301efc76b7e2fa617b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:28,704 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,706 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,710 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:28,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b8e2784650dc0301efc76b7e2fa617b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11823813600, jitterRate=0.10117845237255096}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:28,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:28,713 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b., pid=86, masterSystemTime=1690049488693 2023-07-22 18:11:28,715 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:28,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:28,715 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:28,716 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049488715"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049488715"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049488715"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049488715"}]},"ts":"1690049488715"} 2023-07-22 18:11:28,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-22 18:11:28,719 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,38507,1690049474291 in 175 msec 2023-07-22 18:11:28,721 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE in 514 msec 2023-07-22 18:11:29,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-22 18:11:29,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
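
The REOPEN/MOVE that just finished (pid=84) was driven by the earlier "move tables [Group_testFailRemoveGroup] to rsgroup bar" request: the table's only region is closed on a server in the default group and reopened on one of the servers that now belong to "bar". A minimal sketch of that request through the branch-2 hbase-rsgroup client, with the connection setup assumed:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class MoveTablesSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Reassigns every region of the table onto servers in group "bar"; the master
      // executes this as the TransitRegionStateProcedure seen in the log.
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
    }
  }
}
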
2023-07-22 18:11:29,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:29,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:29,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:29,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-22 18:11:29,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:29,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-22 18:11:29,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:29,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:34802 deadline: 1690050689216, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-22 18:11:29,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:38507] to rsgroup default 2023-07-22 18:11:29,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:29,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:34802 deadline: 1690050689218, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-22 18:11:29,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-22 18:11:29,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:29,223 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 18:11:29,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:29,224 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:29,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-22 18:11:29,226 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 1b8e2784650dc0301efc76b7e2fa617b to RSGroup default 2023-07-22 18:11:29,226 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE 2023-07-22 18:11:29,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 18:11:29,228 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE 2023-07-22 18:11:29,228 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:29,228 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049489228"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049489228"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049489228"}]},"ts":"1690049489228"} 2023-07-22 18:11:29,232 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:29,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b8e2784650dc0301efc76b7e2fa617b, disabling compactions & flushes 2023-07-22 18:11:29,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:29,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:29,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. after waiting 0 ms 2023-07-22 18:11:29,388 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:29,396 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:29,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
2023-07-22 18:11:29,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:29,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1b8e2784650dc0301efc76b7e2fa617b move to jenkins-hbase4.apache.org,45471,1690049478954 record at close sequenceid=5 2023-07-22 18:11:29,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,400 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=CLOSED 2023-07-22 18:11:29,400 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049489400"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049489400"}]},"ts":"1690049489400"} 2023-07-22 18:11:29,403 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-22 18:11:29,403 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,38507,1690049474291 in 171 msec 2023-07-22 18:11:29,404 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:29,554 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:29,555 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049489554"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049489554"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049489554"}]},"ts":"1690049489554"} 2023-07-22 18:11:29,556 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:29,712 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
2023-07-22 18:11:29,712 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1b8e2784650dc0301efc76b7e2fa617b, NAME => 'Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:29,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:29,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,714 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,715 DEBUG [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f 2023-07-22 18:11:29,716 DEBUG [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f 2023-07-22 18:11:29,716 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1b8e2784650dc0301efc76b7e2fa617b columnFamilyName f 2023-07-22 18:11:29,716 INFO [StoreOpener-1b8e2784650dc0301efc76b7e2fa617b-1] regionserver.HStore(310): Store=1b8e2784650dc0301efc76b7e2fa617b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:29,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,719 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,722 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:29,723 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1b8e2784650dc0301efc76b7e2fa617b; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10867878880, jitterRate=0.012150093913078308}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:29,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:29,724 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b., pid=89, masterSystemTime=1690049489708 2023-07-22 18:11:29,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:29,725 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:29,726 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:29,726 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049489726"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049489726"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049489726"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049489726"}]},"ts":"1690049489726"} 2023-07-22 18:11:29,729 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-22 18:11:29,729 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954 in 171 msec 2023-07-22 18:11:29,730 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, REOPEN/MOVE in 503 msec 2023-07-22 18:11:30,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-22 18:11:30,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
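[Editor's note: a minimal client-side sketch, not taken from the test source. It shows how the REOPEN/MOVE recorded above (CloseRegionProcedure pid=88 / OpenRegionProcedure pid=89 for region 1b8e2784650dc0301efc76b7e2fa617b) can be driven from a client by moving the table between rsgroups; it assumes the branch-2.4 hbase-rsgroup client API (RSGroupAdminClient) and reuses the table and group names from the log. The class name MoveTableToGroup is hypothetical.]

    // Sketch: move a table to the "default" rsgroup, which re-assigns its
    // regions onto servers of that group (the close/open pair logged above).
    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToGroup {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")),
              "default");
        }
      }
    }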
2023-07-22 18:11:30,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:30,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-22 18:11:30,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:30,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:34802 deadline: 1690050690235, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
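[Editor's note: a minimal sketch of the teardown pattern this exception implies, not the test's exact code. The ConstraintException above is expected: a group that still owns servers cannot be removed, so the servers are moved back to "default" first and only then is "bar" removed, which is what the MoveServers / RemoveRSGroup requests logged next do. Assumes the branch-2.4 RSGroupAdminClient / RSGroupInfo API; the class and method names RemoveGroupSafely.removeGroup are hypothetical.]

    // Sketch: drain a rsgroup's servers into the default group, then drop it.
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RemoveGroupSafely {
      static void removeGroup(Connection conn, String group) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        Set<Address> servers = info.getServers();        // e.g. the 3 servers of "bar"
        if (!servers.isEmpty()) {
          // Regions on those servers are re-homed by the master, as logged above.
          rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
        }
        rsGroupAdmin.removeRSGroup(group);               // now succeeds
      }
    }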
2023-07-22 18:11:30,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:38507] to rsgroup default 2023-07-22 18:11:30,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-22 18:11:30,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:30,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-22 18:11:30,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291, jenkins-hbase4.apache.org,38977,1690049474061] are moved back to bar 2023-07-22 18:11:30,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-22 18:11:30,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:30,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-22 18:11:30,247 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38977] ipc.CallRunner(144): callId: 214 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:49714 deadline: 1690049550247, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=45471 startCode=1690049478954. As of locationSeqNum=6. 
2023-07-22 18:11:30,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,358 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:30,360 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:30,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,368 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-22 18:11:30,369 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-22 18:11:30,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,374 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-22 18:11:30,375 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049490375"}]},"ts":"1690049490375"} 2023-07-22 18:11:30,377 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-22 18:11:30,381 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-22 18:11:30,382 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, UNASSIGN}] 2023-07-22 18:11:30,385 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, UNASSIGN 2023-07-22 18:11:30,386 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:30,386 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049490386"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049490386"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049490386"}]},"ts":"1690049490386"} 2023-07-22 18:11:30,387 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:30,475 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-22 18:11:30,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:30,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1b8e2784650dc0301efc76b7e2fa617b, disabling compactions & flushes 2023-07-22 18:11:30,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:30,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:30,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. after waiting 0 ms 2023-07-22 18:11:30,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 2023-07-22 18:11:30,548 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-22 18:11:30,549 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b. 
2023-07-22 18:11:30,549 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1b8e2784650dc0301efc76b7e2fa617b: 2023-07-22 18:11:30,551 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:30,551 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=1b8e2784650dc0301efc76b7e2fa617b, regionState=CLOSED 2023-07-22 18:11:30,551 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1690049490551"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049490551"}]},"ts":"1690049490551"} 2023-07-22 18:11:30,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-22 18:11:30,555 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure 1b8e2784650dc0301efc76b7e2fa617b, server=jenkins-hbase4.apache.org,45471,1690049478954 in 166 msec 2023-07-22 18:11:30,558 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-22 18:11:30,558 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=1b8e2784650dc0301efc76b7e2fa617b, UNASSIGN in 173 msec 2023-07-22 18:11:30,559 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049490558"}]},"ts":"1690049490558"} 2023-07-22 18:11:30,560 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-22 18:11:30,561 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-22 18:11:30,563 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 193 msec 2023-07-22 18:11:30,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-22 18:11:30,677 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-22 18:11:30,678 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-22 18:11:30,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,681 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,681 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-22 18:11:30,685 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:30,689 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:30,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-22 18:11:30,691 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits] 2023-07-22 18:11:30,696 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits/10.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b/recovered.edits/10.seqid 2023-07-22 18:11:30,697 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testFailRemoveGroup/1b8e2784650dc0301efc76b7e2fa617b 2023-07-22 18:11:30,697 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-22 18:11:30,705 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,708 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-22 18:11:30,711 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-22 18:11:30,712 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,712 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
2023-07-22 18:11:30,712 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049490712"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:30,714 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 18:11:30,714 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1b8e2784650dc0301efc76b7e2fa617b, NAME => 'Group_testFailRemoveGroup,,1690049487565.1b8e2784650dc0301efc76b7e2fa617b.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 18:11:30,714 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-22 18:11:30,714 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049490714"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:30,716 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-22 18:11:30,722 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-22 18:11:30,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 44 msec 2023-07-22 18:11:30,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-22 18:11:30,792 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-22 18:11:30,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:30,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
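[Editor's note: a short sketch of the client side of the cleanup recorded above, under the assumption that the test drives it through the standard Admin API (the log shows HBaseAdmin futures for DISABLE procId=90 and DELETE procId=93). The class name DropTable is hypothetical; the Admin calls shown are standard HBase client API.]

    // Sketch: disable then delete a table, the client-side counterpart of
    // DisableTableProcedure (pid=90) and DeleteTableProcedure (pid=93) above.
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;

    public final class DropTable {
      static void drop(Connection conn, String table) throws Exception {
        TableName tn = TableName.valueOf(table);
        try (Admin admin = conn.getAdmin()) {
          if (admin.tableExists(tn)) {
            if (!admin.isTableDisabled(tn)) {
              admin.disableTable(tn);  // table state DISABLING -> DISABLED in hbase:meta
            }
            admin.deleteTable(tn);     // regions archived, rows removed from hbase:meta
          }
        }
      }
    }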
2023-07-22 18:11:30,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:30,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:30,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:30,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:30,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:30,809 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:30,812 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:30,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:30,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:30,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:30,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,824 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:30,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:30,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050690824, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:30,824 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:30,826 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:30,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,827 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,827 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:30,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:30,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:30,847 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=511 (was 507) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1070264335_17 at /127.0.0.1:38350 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1070264335_17 at /127.0.0.1:38374 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:59914 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x5a2c0b37-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:47058 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=811 (was 809) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 400) - SystemLoadAverage LEAK? -, ProcessCount=174 (was 174), AvailableMemoryMB=6407 (was 6446) 2023-07-22 18:11:30,847 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-22 18:11:30,870 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=511, OpenFileDescriptor=811, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=6406 2023-07-22 18:11:30,870 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-22 18:11:30,870 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-22 18:11:30,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:30,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
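[Editor's note: a minimal sketch, assuming the same branch-2.4 RSGroupAdminClient API, of the client-side listing behind the repeated ListRSGroupInfos / GetRSGroupInfo requests in this section; the "Waiting for cleanup to finish [Name:default, Servers:[...], Tables:[...]]" line earlier is the toString of the returned RSGroupInfo objects. The class name ListGroups is hypothetical.]

    // Sketch: list all rsgroups and print their servers and tables.
    import java.util.List;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class ListGroups {
      static void list(Connection conn) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups();
        for (RSGroupInfo g : groups) {
          System.out.println(g.getName()
              + " servers=" + g.getServers()
              + " tables=" + g.getTables());
        }
      }
    }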
2023-07-22 18:11:30,876 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:30,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:30,877 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:30,878 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:30,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,882 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:30,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:30,887 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:30,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:30,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,896 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:30,897 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:30,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:30,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:30,903 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050690902, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:30,903 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:30,908 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:30,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,909 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,909 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:30,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:30,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:30,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:30,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:30,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,914 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,916 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:30,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:30,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,924 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411] to rsgroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:30,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 18:11:30,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844] are moved back to default 2023-07-22 18:11:30,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:30,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:30,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:30,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) 
(remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:30,939 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:30,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:30,942 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:30,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-22 18:11:30,945 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:30,945 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:30,946 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:30,946 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:30,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-22 18:11:30,951 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:30,954 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:30,955 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e empty. 
2023-07-22 18:11:30,955 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:30,956 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-22 18:11:30,972 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:30,974 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => e1cd2bd9d9f8927d50a2b83b9312998e, NAME => 'GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:31,002 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:31,002 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing e1cd2bd9d9f8927d50a2b83b9312998e, disabling compactions & flushes 2023-07-22 18:11:31,002 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:31,003 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:31,003 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. after waiting 0 ms 2023-07-22 18:11:31,003 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:31,003 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 
2023-07-22 18:11:31,003 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for e1cd2bd9d9f8927d50a2b83b9312998e: 2023-07-22 18:11:31,006 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:31,007 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049491007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049491007"}]},"ts":"1690049491007"} 2023-07-22 18:11:31,009 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:31,011 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:31,011 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049491011"}]},"ts":"1690049491011"} 2023-07-22 18:11:31,013 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-22 18:11:31,017 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:31,017 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:31,017 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:31,017 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:31,017 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:31,018 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, ASSIGN}] 2023-07-22 18:11:31,020 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, ASSIGN 2023-07-22 18:11:31,021 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:31,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-22 18:11:31,171 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 18:11:31,173 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:31,173 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049491173"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049491173"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049491173"}]},"ts":"1690049491173"} 2023-07-22 18:11:31,175 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:31,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-22 18:11:31,331 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:31,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1cd2bd9d9f8927d50a2b83b9312998e, NAME => 'GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:31,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:31,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,333 INFO [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,335 DEBUG [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/f 2023-07-22 18:11:31,336 DEBUG [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/f 2023-07-22 18:11:31,336 INFO [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1cd2bd9d9f8927d50a2b83b9312998e columnFamilyName f 2023-07-22 18:11:31,337 INFO [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] regionserver.HStore(310): Store=e1cd2bd9d9f8927d50a2b83b9312998e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:31,338 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,339 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,342 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:31,344 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:31,345 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e1cd2bd9d9f8927d50a2b83b9312998e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11281366240, jitterRate=0.05065910518169403}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:31,345 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e1cd2bd9d9f8927d50a2b83b9312998e: 2023-07-22 18:11:31,346 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e., pid=96, masterSystemTime=1690049491327 2023-07-22 18:11:31,347 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:31,347 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 
2023-07-22 18:11:31,348 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:31,348 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049491348"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049491348"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049491348"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049491348"}]},"ts":"1690049491348"} 2023-07-22 18:11:31,352 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-22 18:11:31,352 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,38977,1690049474061 in 175 msec 2023-07-22 18:11:31,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-22 18:11:31,354 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, ASSIGN in 334 msec 2023-07-22 18:11:31,354 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:31,354 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049491354"}]},"ts":"1690049491354"} 2023-07-22 18:11:31,355 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-22 18:11:31,358 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:31,363 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 419 msec 2023-07-22 18:11:31,435 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-22 18:11:31,551 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-22 18:11:31,551 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-22 18:11:31,552 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. Timeout = 60000ms 2023-07-22 18:11:31,552 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:31,557 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 
2023-07-22 18:11:31,558 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:31,558 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-22 18:11:31,560 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:31,561 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:31,563 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:31,564 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-22 18:11:31,564 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-22 18:11:31,566 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:31,566 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:31,567 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:31,567 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:31,569 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:31,571 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,571 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 empty. 
2023-07-22 18:11:31,572 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,572 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-22 18:11:31,600 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:31,602 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5762a1ca72da123e248a540b39380577, NAME => 'GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:31,640 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:31,640 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 5762a1ca72da123e248a540b39380577, disabling compactions & flushes 2023-07-22 18:11:31,640 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:31,640 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:31,640 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. after waiting 0 ms 2023-07-22 18:11:31,640 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:31,640 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 
2023-07-22 18:11:31,640 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 5762a1ca72da123e248a540b39380577: 2023-07-22 18:11:31,643 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:31,644 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049491644"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049491644"}]},"ts":"1690049491644"} 2023-07-22 18:11:31,646 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:31,647 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:31,647 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049491647"}]},"ts":"1690049491647"} 2023-07-22 18:11:31,651 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-22 18:11:31,655 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:31,655 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:31,655 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:31,655 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:31,655 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:31,655 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, ASSIGN}] 2023-07-22 18:11:31,657 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, ASSIGN 2023-07-22 18:11:31,658 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38507,1690049474291; forceNewPlan=false, retain=false 2023-07-22 18:11:31,665 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-22 18:11:31,808 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-22 18:11:31,810 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:31,810 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049491810"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049491810"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049491810"}]},"ts":"1690049491810"} 2023-07-22 18:11:31,812 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:31,866 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-22 18:11:31,968 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:31,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5762a1ca72da123e248a540b39380577, NAME => 'GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:31,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:31,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,968 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,970 INFO [StoreOpener-5762a1ca72da123e248a540b39380577-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,971 DEBUG [StoreOpener-5762a1ca72da123e248a540b39380577-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/f 2023-07-22 18:11:31,971 DEBUG [StoreOpener-5762a1ca72da123e248a540b39380577-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/f 2023-07-22 18:11:31,972 INFO [StoreOpener-5762a1ca72da123e248a540b39380577-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5762a1ca72da123e248a540b39380577 columnFamilyName f 2023-07-22 18:11:31,972 INFO [StoreOpener-5762a1ca72da123e248a540b39380577-1] regionserver.HStore(310): Store=5762a1ca72da123e248a540b39380577/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:31,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,973 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,976 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:31,978 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:31,979 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5762a1ca72da123e248a540b39380577; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9883670240, jitterRate=-0.07951147854328156}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:31,979 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5762a1ca72da123e248a540b39380577: 2023-07-22 18:11:31,980 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577., pid=99, masterSystemTime=1690049491964 2023-07-22 18:11:31,981 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:31,981 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 
2023-07-22 18:11:31,981 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:31,982 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049491981"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049491981"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049491981"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049491981"}]},"ts":"1690049491981"} 2023-07-22 18:11:31,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-22 18:11:31,985 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,38507,1690049474291 in 171 msec 2023-07-22 18:11:31,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-22 18:11:31,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, ASSIGN in 330 msec 2023-07-22 18:11:31,987 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:31,988 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049491987"}]},"ts":"1690049491987"} 2023-07-22 18:11:31,989 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-22 18:11:31,991 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:31,992 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 431 msec 2023-07-22 18:11:32,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-22 18:11:32,168 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-22 18:11:32,168 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-22 18:11:32,168 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:32,174 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
2023-07-22 18:11:32,174 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:32,174 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-22 18:11:32,175 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:32,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-22 18:11:32,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:32,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-22 18:11:32,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:32,190 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,193 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:32,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:32,197 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:32,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,198 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 5762a1ca72da123e248a540b39380577 to RSGroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, REOPEN/MOVE 2023-07-22 18:11:32,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,199 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region e1cd2bd9d9f8927d50a2b83b9312998e to RSGroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:32,201 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, REOPEN/MOVE 2023-07-22 18:11:32,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, REOPEN/MOVE 2023-07-22 18:11:32,202 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:32,203 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, REOPEN/MOVE 2023-07-22 18:11:32,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_1632491602, current retry=0 2023-07-22 18:11:32,203 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492202"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049492202"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049492202"}]},"ts":"1690049492202"} 2023-07-22 18:11:32,204 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:32,204 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492204"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049492204"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049492204"}]},"ts":"1690049492204"} 2023-07-22 18:11:32,205 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,38507,1690049474291}] 2023-07-22 18:11:32,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:32,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5762a1ca72da123e248a540b39380577, disabling compactions & flushes 2023-07-22 18:11:32,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:32,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:32,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. after waiting 0 ms 2023-07-22 18:11:32,359 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:32,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,360 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e1cd2bd9d9f8927d50a2b83b9312998e, disabling compactions & flushes 2023-07-22 18:11:32,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:32,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:32,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. after waiting 0 ms 2023-07-22 18:11:32,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:32,365 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:32,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:32,541 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5762a1ca72da123e248a540b39380577: 2023-07-22 18:11:32,541 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5762a1ca72da123e248a540b39380577 move to jenkins-hbase4.apache.org,33411,1690049473844 record at close sequenceid=2 2023-07-22 18:11:32,542 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:32,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 
2023-07-22 18:11:32,543 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e1cd2bd9d9f8927d50a2b83b9312998e: 2023-07-22 18:11:32,543 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding e1cd2bd9d9f8927d50a2b83b9312998e move to jenkins-hbase4.apache.org,33411,1690049473844 record at close sequenceid=2 2023-07-22 18:11:32,545 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,546 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=CLOSED 2023-07-22 18:11:32,546 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492546"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049492546"}]},"ts":"1690049492546"} 2023-07-22 18:11:32,548 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,548 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=CLOSED 2023-07-22 18:11:32,548 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492548"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049492548"}]},"ts":"1690049492548"} 2023-07-22 18:11:32,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-22 18:11:32,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,38507,1690049474291 in 343 msec 2023-07-22 18:11:32,551 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:32,554 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-22 18:11:32,554 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,38977,1690049474061 in 347 msec 2023-07-22 18:11:32,555 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33411,1690049473844; forceNewPlan=false, retain=false 2023-07-22 18:11:32,702 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 
18:11:32,702 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:32,702 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049492701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049492701"}]},"ts":"1690049492701"} 2023-07-22 18:11:32,702 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492701"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049492701"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049492701"}]},"ts":"1690049492701"} 2023-07-22 18:11:32,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=100, state=RUNNABLE; OpenRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:32,704 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=101, state=RUNNABLE; OpenRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:32,860 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:32,860 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5762a1ca72da123e248a540b39380577, NAME => 'GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:32,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:32,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,861 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,867 INFO [StoreOpener-5762a1ca72da123e248a540b39380577-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,868 DEBUG [StoreOpener-5762a1ca72da123e248a540b39380577-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/f 2023-07-22 18:11:32,868 DEBUG [StoreOpener-5762a1ca72da123e248a540b39380577-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/f 2023-07-22 18:11:32,868 INFO [StoreOpener-5762a1ca72da123e248a540b39380577-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5762a1ca72da123e248a540b39380577 columnFamilyName f 2023-07-22 18:11:32,869 INFO [StoreOpener-5762a1ca72da123e248a540b39380577-1] regionserver.HStore(310): Store=5762a1ca72da123e248a540b39380577/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:32,869 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,871 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,874 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:32,875 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5762a1ca72da123e248a540b39380577; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10288465920, jitterRate=-0.04181194305419922}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:32,875 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5762a1ca72da123e248a540b39380577: 2023-07-22 18:11:32,876 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577., pid=104, masterSystemTime=1690049492856 2023-07-22 18:11:32,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:32,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 
2023-07-22 18:11:32,877 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:32,877 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e1cd2bd9d9f8927d50a2b83b9312998e, NAME => 'GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:32,878 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:32,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:32,878 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492878"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049492878"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049492878"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049492878"}]},"ts":"1690049492878"} 2023-07-22 18:11:32,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,878 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,881 INFO [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,882 DEBUG [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/f 2023-07-22 18:11:32,882 DEBUG [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/f 2023-07-22 18:11:32,882 INFO [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for 
tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e1cd2bd9d9f8927d50a2b83b9312998e columnFamilyName f 2023-07-22 18:11:32,883 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=100 2023-07-22 18:11:32,883 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=100, state=SUCCESS; OpenRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,33411,1690049473844 in 177 msec 2023-07-22 18:11:32,883 INFO [StoreOpener-e1cd2bd9d9f8927d50a2b83b9312998e-1] regionserver.HStore(310): Store=e1cd2bd9d9f8927d50a2b83b9312998e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:32,884 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,884 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, REOPEN/MOVE in 685 msec 2023-07-22 18:11:32,885 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,889 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:32,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e1cd2bd9d9f8927d50a2b83b9312998e; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11796992640, jitterRate=0.09868055582046509}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:32,890 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e1cd2bd9d9f8927d50a2b83b9312998e: 2023-07-22 18:11:32,890 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e., pid=105, masterSystemTime=1690049492856 2023-07-22 18:11:32,891 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:32,891 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 
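Pids 100-105 above are the server side of the MoveTables request logged at 18:11:32,193: each table's single region is closed on its old server and reopened on jenkins-hbase4.apache.org,33411. The client call that triggers this is roughly the following, a sketch assuming the branch-2.4 hbase-rsgroup RSGroupAdminClient constructor and moveTables signature (group and table names taken from the log):

    import java.util.Arrays;
    import java.util.HashSet;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(TEST_UTIL.getConnection());
    // One REOPEN/MOVE TransitRegionStateProcedure per region (pids 100 and 101)
    // follows this single RPC.
    rsGroupAdmin.moveTables(
        new HashSet<>(Arrays.asList(
            TableName.valueOf("GrouptestMultiTableMoveA"),
            TableName.valueOf("GrouptestMultiTableMoveB"))),
        "Group_testMultiTableMove_1632491602");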
2023-07-22 18:11:32,892 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:32,892 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049492892"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049492892"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049492892"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049492892"}]},"ts":"1690049492892"} 2023-07-22 18:11:32,895 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=101 2023-07-22 18:11:32,895 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=101, state=SUCCESS; OpenRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,33411,1690049473844 in 189 msec 2023-07-22 18:11:32,896 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, REOPEN/MOVE in 696 msec 2023-07-22 18:11:33,203 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-22 18:11:33,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_1632491602. 2023-07-22 18:11:33,204 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:33,207 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:33,208 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:33,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:33,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:33,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:33,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:33,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_1632491602 2023-07-22 18:11:33,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:33,214 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-22 18:11:33,215 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-22 18:11:33,215 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-22 18:11:33,218 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049493218"}]},"ts":"1690049493218"} 2023-07-22 18:11:33,219 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-22 18:11:33,221 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-22 18:11:33,224 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, UNASSIGN}] 2023-07-22 18:11:33,225 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, UNASSIGN 2023-07-22 18:11:33,226 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:33,226 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049493226"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049493226"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049493226"}]},"ts":"1690049493226"} 2023-07-22 18:11:33,227 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, 
server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:33,319 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-22 18:11:33,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:33,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e1cd2bd9d9f8927d50a2b83b9312998e, disabling compactions & flushes 2023-07-22 18:11:33,380 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:33,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:33,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. after waiting 0 ms 2023-07-22 18:11:33,380 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 2023-07-22 18:11:33,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:33,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e. 
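The GetRSGroupInfoOfTable and GetRSGroupInfo requests logged between 18:11:33,207 and 18:11:33,213 are the test confirming that both tables map to the new group before tearing them down. A hedged sketch of that check, reusing the rsGroupAdmin handle from the earlier sketch (the exact assertions used by TestRSGroupsAdmin1 are not visible in this log):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Both tables should report the target group once the REOPEN/MOVE procedures finish.
    RSGroupInfo infoA =
        rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveA"));
    RSGroupInfo infoB =
        rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
    assert "Group_testMultiTableMove_1632491602".equals(infoA.getName());
    assert infoA.getName().equals(infoB.getName());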
2023-07-22 18:11:33,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e1cd2bd9d9f8927d50a2b83b9312998e: 2023-07-22 18:11:33,387 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:33,388 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=e1cd2bd9d9f8927d50a2b83b9312998e, regionState=CLOSED 2023-07-22 18:11:33,388 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049493388"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049493388"}]},"ts":"1690049493388"} 2023-07-22 18:11:33,392 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-22 18:11:33,392 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure e1cd2bd9d9f8927d50a2b83b9312998e, server=jenkins-hbase4.apache.org,33411,1690049473844 in 162 msec 2023-07-22 18:11:33,394 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-22 18:11:33,394 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=e1cd2bd9d9f8927d50a2b83b9312998e, UNASSIGN in 170 msec 2023-07-22 18:11:33,395 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049493395"}]},"ts":"1690049493395"} 2023-07-22 18:11:33,396 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-22 18:11:33,398 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-22 18:11:33,400 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 184 msec 2023-07-22 18:11:33,454 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 18:11:33,521 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-22 18:11:33,521 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-22 18:11:33,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-22 18:11:33,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,525 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_1632491602' 2023-07-22 18:11:33,526 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:33,529 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:33,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:33,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:33,531 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:33,534 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/recovered.edits] 2023-07-22 18:11:33,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-22 18:11:33,543 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e/recovered.edits/7.seqid 2023-07-22 18:11:33,543 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveA/e1cd2bd9d9f8927d50a2b83b9312998e 2023-07-22 18:11:33,543 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-22 18:11:33,548 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,551 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-22 18:11:33,553 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 
2023-07-22 18:11:33,554 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,554 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 2023-07-22 18:11:33,555 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049493555"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:33,557 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 18:11:33,557 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => e1cd2bd9d9f8927d50a2b83b9312998e, NAME => 'GrouptestMultiTableMoveA,,1690049490939.e1cd2bd9d9f8927d50a2b83b9312998e.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 18:11:33,557 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-22 18:11:33,557 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049493557"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:33,559 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-22 18:11:33,561 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-22 18:11:33,563 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 39 msec 2023-07-22 18:11:33,635 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-22 18:11:33,636 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-22 18:11:33,636 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-22 18:11:33,637 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-22 18:11:33,637 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,640 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-22 18:11:33,640 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049493640"}]},"ts":"1690049493640"} 2023-07-22 18:11:33,642 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-22 18:11:33,645 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-22 18:11:33,646 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): 
Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, UNASSIGN}] 2023-07-22 18:11:33,648 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, UNASSIGN 2023-07-22 18:11:33,648 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:33,649 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049493648"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049493648"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049493648"}]},"ts":"1690049493648"} 2023-07-22 18:11:33,650 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:33,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-22 18:11:33,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:33,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5762a1ca72da123e248a540b39380577, disabling compactions & flushes 2023-07-22 18:11:33,804 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:33,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:33,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. after waiting 0 ms 2023-07-22 18:11:33,804 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 2023-07-22 18:11:33,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:33,809 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577. 
2023-07-22 18:11:33,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5762a1ca72da123e248a540b39380577: 2023-07-22 18:11:33,811 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5762a1ca72da123e248a540b39380577 2023-07-22 18:11:33,811 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=5762a1ca72da123e248a540b39380577, regionState=CLOSED 2023-07-22 18:11:33,811 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1690049493811"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049493811"}]},"ts":"1690049493811"} 2023-07-22 18:11:33,814 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-22 18:11:33,814 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 5762a1ca72da123e248a540b39380577, server=jenkins-hbase4.apache.org,33411,1690049473844 in 163 msec 2023-07-22 18:11:33,815 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-22 18:11:33,816 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=5762a1ca72da123e248a540b39380577, UNASSIGN in 168 msec 2023-07-22 18:11:33,816 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049493816"}]},"ts":"1690049493816"} 2023-07-22 18:11:33,817 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-22 18:11:33,820 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-22 18:11:33,822 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 184 msec 2023-07-22 18:11:33,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-22 18:11:33,943 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-22 18:11:33,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-22 18:11:33,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,946 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,946 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_1632491602' 2023-07-22 18:11:33,947 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:33,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:33,950 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:33,951 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:33,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:33,955 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/recovered.edits] 2023-07-22 18:11:33,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-22 18:11:33,962 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/recovered.edits/7.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577/recovered.edits/7.seqid 2023-07-22 18:11:33,963 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/GrouptestMultiTableMoveB/5762a1ca72da123e248a540b39380577 2023-07-22 18:11:33,963 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-22 18:11:33,966 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,968 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-22 18:11:33,969 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-22 18:11:33,970 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,970 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-22 18:11:33,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049493970"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:33,972 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 18:11:33,972 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5762a1ca72da123e248a540b39380577, NAME => 'GrouptestMultiTableMoveB,,1690049491559.5762a1ca72da123e248a540b39380577.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 18:11:33,972 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-22 18:11:33,972 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049493972"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:33,973 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-22 18:11:33,975 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-22 18:11:33,976 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 31 msec 2023-07-22 18:11:34,060 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-22 18:11:34,060 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-22 18:11:34,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,064 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
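Pids 106-113 above are the usual disable-then-delete sequence for both tables, with the RSGroupAdminEndpoint hook removing each deleted table from the group's member list as it goes. On the client side this is just the Admin API, sketched here for table A under the same TEST_UTIL assumption (table B follows the identical pattern):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    TableName tableA = TableName.valueOf("GrouptestMultiTableMoveA");
    try (Admin admin = TEST_UTIL.getConnection().getAdmin()) {
        admin.disableTable(tableA);  // DisableTableProcedure, pid=106 above
        admin.deleteTable(tableA);   // DeleteTableProcedure, pid=109 above
    }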
2023-07-22 18:11:34,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,066 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411] to rsgroup default 2023-07-22 18:11:34,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_1632491602 2023-07-22 18:11:34,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_1632491602, current retry=0 2023-07-22 18:11:34,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844] are moved back to Group_testMultiTableMove_1632491602 2023-07-22 18:11:34,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_1632491602 => default 2023-07-22 18:11:34,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,076 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_1632491602 2023-07-22 18:11:34,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,084 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:34,086 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
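The records above begin the TestRSGroupsBase teardown: the server that hosted the moved regions is returned to the default group and the temporary group is removed (the ZK GroupInfo count drops from 6 to 5). Roughly, with the same client and the Address type from org.apache.hadoop.hbase.net (method signatures assumed from the rsgroup admin interface; the server address is the one logged):

    import java.util.Collections;
    import org.apache.hadoop.hbase.net.Address;

    // Return the region server that was hosting the moved regions to the default group...
    rsGroupAdmin.moveServers(
        Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:33411")),
        "default");
    // ...then drop the now-empty test group.
    rsGroupAdmin.removeRSGroup("Group_testMultiTableMove_1632491602");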
2023-07-22 18:11:34,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:34,088 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,089 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:34,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,093 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:34,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,097 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:34,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:34,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,099 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,101 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:34,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:34,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050694109, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:34,110 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:34,111 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:34,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,112 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,112 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:34,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,113 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,133 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=510 (was 511), OpenFileDescriptor=801 (was 811), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 416), ProcessCount=174 (was 174), AvailableMemoryMB=6217 (was 6406) 2023-07-22 18:11:34,133 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-22 18:11:34,149 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=510, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=6216 2023-07-22 18:11:34,150 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=510 is superior to 500 2023-07-22 18:11:34,150 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-22 18:11:34,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,154 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,155 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 18:11:34,155 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:34,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:34,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,160 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:34,161 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,164 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:34,164 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:34,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:34,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:34,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050694173, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:34,174 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:34,175 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:34,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,176 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,176 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:34,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,177 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,178 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-22 18:11:34,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,180 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,183 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,186 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup oldGroup 2023-07-22 18:11:34,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,194 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,195 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 18:11:34,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to default 2023-07-22 18:11:34,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-22 18:11:34,196 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,199 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,200 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-22 18:11:34,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,202 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-22 18:11:34,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,203 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,204 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-22 18:11:34,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,206 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-22 18:11:34,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,208 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:34,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,216 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38977] to rsgroup anotherRSGroup 2023-07-22 18:11:34,220 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-22 18:11:34,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,221 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:34,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 18:11:34,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38977,1690049474061] are moved back to default 2023-07-22 18:11:34,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-22 18:11:34,223 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,225 
INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-22 18:11:34,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-22 18:11:34,228 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-22 18:11:34,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,234 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:34802 deadline: 1690050694233, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-22 18:11:34,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-22 18:11:34,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,235 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:34802 deadline: 1690050694235, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-22 18:11:34,236 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-22 18:11:34,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:34802 deadline: 1690050694236, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-22 18:11:34,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-22 18:11:34,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,237 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:34802 deadline: 1690050694237, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-22 18:11:34,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,241 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:34,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,242 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38977] to rsgroup default 2023-07-22 18:11:34,245 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-22 18:11:34,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:34,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-22 18:11:34,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38977,1690049474061] are moved back to anotherRSGroup 2023-07-22 18:11:34,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-22 18:11:34,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,250 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-22 18:11:34,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,254 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-22 18:11:34,261 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,262 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-22 18:11:34,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,263 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup default 2023-07-22 18:11:34,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,265 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-22 18:11:34,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,266 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-22 18:11:34,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to oldGroup 2023-07-22 18:11:34,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-22 18:11:34,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,269 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-22 18:11:34,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,273 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:34,275 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:34,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:34,276 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,277 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:34,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:34,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,286 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:34,287 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:34,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,290 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:34,291 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,295 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,297 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:34,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050694297, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:34,298 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:34,300 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:34,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,301 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:34,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,324 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=514 (was 510) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=801 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=416 (was 416), ProcessCount=174 (was 174), AvailableMemoryMB=6214 (was 6216) 2023-07-22 18:11:34,324 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-22 18:11:34,347 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=514, OpenFileDescriptor=801, MaxFileDescriptor=60000, SystemLoadAverage=416, ProcessCount=174, AvailableMemoryMB=6209 2023-07-22 18:11:34,347 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-22 18:11:34,348 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-22 18:11:34,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:34,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
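
The ConstraintException that appears above and again in the traces that follow comes from the teardown/setup helper trying to move the master's own address (jenkins-hbase4.apache.org:40289, the port the RPC handlers are logging under) into the group named master; RSGroupAdminServer.moveServers only accepts addresses of known region servers, so the call is rejected and TestRSGroupsBase logs it as "Got this on setup, FYI" and continues. A sketch of tolerating the same rejection on the client side, under the same RSGroupAdminClient assumptions as the earlier sketch; the address literal is copied from the log and the helper name is hypothetical.

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public final class MoveMasterAddressSketch {
      // Moving an address that is not a live region server (here, the master's
      // RPC endpoint) is rejected server-side; the client sees the error
      // unwrapped as a ConstraintException.
      public static void tryMoveMasterAddress(RSGroupAdminClient rsGroupAdmin)
          throws IOException {
        Address masterAddr = Address.fromString("jenkins-hbase4.apache.org:40289");
        try {
          rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
        } catch (ConstraintException expected) {
          // "Server ... is either offline or it does not exist." -- the test
          // logs this as a warning and otherwise ignores it.
        }
      }
    }
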
2023-07-22 18:11:34,353 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:34,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:34,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,355 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:34,359 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,360 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:34,361 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:34,364 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:34,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:34,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,370 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:34,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,374 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,375 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:34,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:34,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050694375, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:34,376 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:34,378 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:34,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,379 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:34,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:34,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-22 18:11:34,383 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:34,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,384 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,385 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,390 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:34,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,395 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup oldgroup 2023-07-22 18:11:34,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:34,397 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,398 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 18:11:34,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to default 2023-07-22 18:11:34,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-22 18:11:34,400 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:34,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:34,403 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:34,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-22 18:11:34,405 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:34,407 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:34,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-22 18:11:34,409 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:34,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-22 18:11:34,410 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 18:11:34,411 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:34,411 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:34,412 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:34,412 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:34,414 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:34,416 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,416 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa empty. 
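
The HMaster entry at the start of the block above is the client-side create of 'testRename' with a single 'tr' family and REGION_REPLICATION => '1', which the master then drives through CreateTableProcedure as pid=114. A sketch of the equivalent request with the standard 2.x descriptor builders follows; only the family name and replication count are set explicitly, everything else stays at the defaults echoed in the log line.

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public final class CreateTestRenameSketch {
      // Issues the same create that pid=114 executes: one family 'tr',
      // REGION_REPLICATION=1, defaults for the remaining attributes.
      public static void createTable(Connection conn) throws IOException {
        try (Admin admin = conn.getAdmin()) {
          admin.createTable(
              TableDescriptorBuilder.newBuilder(TableName.valueOf("testRename"))
                  .setRegionReplication(1)
                  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
                  .build());   // returns once the CreateTableProcedure completes
        }
      }
    }
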
2023-07-22 18:11:34,417 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,417 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-22 18:11:34,432 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:34,433 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5ad742ffe0eb92d1c27e3c0036eef8fa, NAME => 'testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:34,447 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:34,448 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 5ad742ffe0eb92d1c27e3c0036eef8fa, disabling compactions & flushes 2023-07-22 18:11:34,448 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:34,448 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:34,448 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. after waiting 0 ms 2023-07-22 18:11:34,448 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:34,448 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:34,448 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:34,450 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:34,451 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049494451"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049494451"}]},"ts":"1690049494451"} 2023-07-22 18:11:34,453 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-22 18:11:34,454 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:34,454 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049494454"}]},"ts":"1690049494454"} 2023-07-22 18:11:34,455 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-22 18:11:34,459 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:34,459 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:34,459 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:34,459 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:34,460 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, ASSIGN}] 2023-07-22 18:11:34,462 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, ASSIGN 2023-07-22 18:11:34,462 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:34,511 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 18:11:34,613 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
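
Once the ASSIGN subprocedure (pid=115) picks a target, the OPENING/OPEN transitions below record where region 5ad742ffe0eb92d1c27e3c0036eef8fa actually lands. A small sketch of checking that placement from a client with the standard RegionLocator API; it assumes an already-open Connection and is not part of the test itself.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    public final class RegionPlacementSketch {
      // Prints encodedName -> server for every region of testRename,
      // which should match the regionLocation values written to hbase:meta.
      public static void printPlacement(Connection conn) throws IOException {
        try (RegionLocator locator =
            conn.getRegionLocator(TableName.valueOf("testRename"))) {
          for (HRegionLocation loc : locator.getAllRegionLocations()) {
            System.out.println(
                loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
          }
        }
      }
    }
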
2023-07-22 18:11:34,614 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:34,614 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049494614"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049494614"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049494614"}]},"ts":"1690049494614"} 2023-07-22 18:11:34,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:34,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 18:11:34,771 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:34,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ad742ffe0eb92d1c27e3c0036eef8fa, NAME => 'testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:34,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:34,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,771 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,773 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,774 DEBUG [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/tr 2023-07-22 18:11:34,774 DEBUG [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/tr 2023-07-22 18:11:34,775 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ad742ffe0eb92d1c27e3c0036eef8fa columnFamilyName tr 2023-07-22 18:11:34,775 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] regionserver.HStore(310): Store=5ad742ffe0eb92d1c27e3c0036eef8fa/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:34,776 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,777 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,779 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:34,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:34,782 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5ad742ffe0eb92d1c27e3c0036eef8fa; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9399995520, jitterRate=-0.12455719709396362}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:34,782 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:34,783 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa., pid=116, masterSystemTime=1690049494767 2023-07-22 18:11:34,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:34,784 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 
2023-07-22 18:11:34,785 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:34,785 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049494785"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049494785"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049494785"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049494785"}]},"ts":"1690049494785"} 2023-07-22 18:11:34,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-22 18:11:34,788 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,38977,1690049474061 in 170 msec 2023-07-22 18:11:34,794 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-22 18:11:34,794 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, ASSIGN in 329 msec 2023-07-22 18:11:34,795 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:34,795 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049494795"}]},"ts":"1690049494795"} 2023-07-22 18:11:34,796 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-22 18:11:34,798 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:34,800 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 392 msec 2023-07-22 18:11:35,013 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-22 18:11:35,013 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-22 18:11:35,014 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-22 18:11:35,014 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:35,017 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-22 18:11:35,017 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:35,018 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
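
Before moving the new table into oldgroup (next entries), the test blocks until every region of testRename is assigned, which is the HBaseTestingUtility wait logged above ("Waiting until all regions of table testRename get assigned"). The same wait written out as a test would call it; the utility instance here stands in for the already-started mini-cluster utility.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    public final class WaitForAssignmentSketch {
      // Blocks (up to a default timeout) until hbase:meta and the master's
      // in-memory assignment state both show every region of the table as open.
      public static void waitForTestRename(HBaseTestingUtility util)
          throws IOException {
        util.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"));
      }
    }
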
2023-07-22 18:11:35,019 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-22 18:11:35,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:35,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:35,022 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:35,023 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:35,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-22 18:11:35,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 5ad742ffe0eb92d1c27e3c0036eef8fa to RSGroup oldgroup 2023-07-22 18:11:35,024 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:35,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:35,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:35,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:35,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:35,025 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE 2023-07-22 18:11:35,025 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-22 18:11:35,026 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE 2023-07-22 18:11:35,026 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:35,026 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049495026"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049495026"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049495026"}]},"ts":"1690049495026"} 2023-07-22 18:11:35,028 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:35,181 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5ad742ffe0eb92d1c27e3c0036eef8fa, disabling compactions & flushes 2023-07-22 18:11:35,182 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:35,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:35,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. after waiting 0 ms 2023-07-22 18:11:35,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:35,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:35,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:35,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:35,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5ad742ffe0eb92d1c27e3c0036eef8fa move to jenkins-hbase4.apache.org,33411,1690049473844 record at close sequenceid=2 2023-07-22 18:11:35,190 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,191 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=CLOSED 2023-07-22 18:11:35,191 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049495191"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049495191"}]},"ts":"1690049495191"} 2023-07-22 18:11:35,197 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-22 18:11:35,197 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,38977,1690049474061 in 168 msec 2023-07-22 18:11:35,198 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33411,1690049473844; 
forceNewPlan=false, retain=false 2023-07-22 18:11:35,348 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:35,349 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:35,349 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049495349"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049495349"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049495349"}]},"ts":"1690049495349"} 2023-07-22 18:11:35,351 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:35,507 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:35,507 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ad742ffe0eb92d1c27e3c0036eef8fa, NAME => 'testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:35,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:35,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,509 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,510 DEBUG [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/tr 2023-07-22 18:11:35,510 DEBUG [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/tr 2023-07-22 18:11:35,511 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ad742ffe0eb92d1c27e3c0036eef8fa columnFamilyName tr 2023-07-22 18:11:35,511 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] regionserver.HStore(310): Store=5ad742ffe0eb92d1c27e3c0036eef8fa/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:35,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:35,517 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5ad742ffe0eb92d1c27e3c0036eef8fa; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9580638880, jitterRate=-0.10773347318172455}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:35,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:35,518 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa., pid=119, masterSystemTime=1690049495503 2023-07-22 18:11:35,520 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:35,520 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 
2023-07-22 18:11:35,520 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:35,521 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049495520"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049495520"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049495520"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049495520"}]},"ts":"1690049495520"} 2023-07-22 18:11:35,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-22 18:11:35,524 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,33411,1690049473844 in 171 msec 2023-07-22 18:11:35,526 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE in 499 msec 2023-07-22 18:11:36,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-22 18:11:36,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-22 18:11:36,026 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:36,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:36,029 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:36,032 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:36,032 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-22 18:11:36,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:36,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-22 18:11:36,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:36,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-22 18:11:36,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:36,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:36,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:36,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-22 18:11:36,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:36,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:36,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:36,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:36,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:36,042 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:36,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:36,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:36,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38977] to rsgroup normal 2023-07-22 18:11:36,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:36,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:36,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:36,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:36,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:36,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 18:11:36,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38977,1690049474061] are moved back to default 2023-07-22 18:11:36,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-22 18:11:36,056 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:36,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:36,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:36,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-22 18:11:36,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:36,062 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:36,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-22 18:11:36,065 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:36,065 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-22 18:11:36,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-22 18:11:36,067 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:36,067 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:36,067 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:36,068 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-22 18:11:36,068 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:36,070 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:36,072 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,073 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d empty. 2023-07-22 18:11:36,073 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,073 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-22 18:11:36,092 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:36,093 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 7f23a00cc0ec3efc597549946ee0206d, NAME => 'unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:36,111 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:36,111 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 7f23a00cc0ec3efc597549946ee0206d, disabling compactions & flushes 2023-07-22 18:11:36,111 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,111 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,111 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. after waiting 0 ms 2023-07-22 18:11:36,111 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,111 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 
2023-07-22 18:11:36,111 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:36,114 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:36,114 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049496114"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049496114"}]},"ts":"1690049496114"} 2023-07-22 18:11:36,116 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:36,116 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:36,116 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049496116"}]},"ts":"1690049496116"} 2023-07-22 18:11:36,117 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-22 18:11:36,120 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, ASSIGN}] 2023-07-22 18:11:36,122 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, ASSIGN 2023-07-22 18:11:36,123 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:36,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-22 18:11:36,275 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:36,275 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049496275"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049496275"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049496275"}]},"ts":"1690049496275"} 2023-07-22 18:11:36,276 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:36,367 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-22 18:11:36,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f23a00cc0ec3efc597549946ee0206d, NAME => 'unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,433 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,435 DEBUG [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/ut 2023-07-22 18:11:36,435 DEBUG [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/ut 2023-07-22 18:11:36,435 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f23a00cc0ec3efc597549946ee0206d columnFamilyName ut 2023-07-22 18:11:36,436 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] regionserver.HStore(310): Store=7f23a00cc0ec3efc597549946ee0206d/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:36,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,437 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,440 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,441 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:36,442 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f23a00cc0ec3efc597549946ee0206d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11808355520, jitterRate=0.09973880648612976}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:36,442 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:36,443 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d., pid=122, masterSystemTime=1690049496428 2023-07-22 18:11:36,444 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,444 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 
2023-07-22 18:11:36,445 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:36,445 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049496445"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049496445"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049496445"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049496445"}]},"ts":"1690049496445"} 2023-07-22 18:11:36,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-22 18:11:36,448 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,45471,1690049478954 in 170 msec 2023-07-22 18:11:36,449 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-22 18:11:36,450 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, ASSIGN in 328 msec 2023-07-22 18:11:36,450 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:36,450 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049496450"}]},"ts":"1690049496450"} 2023-07-22 18:11:36,451 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-22 18:11:36,455 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:36,458 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 393 msec 2023-07-22 18:11:36,669 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-22 18:11:36,669 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-22 18:11:36,669 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-22 18:11:36,669 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:36,673 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-22 18:11:36,673 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:36,673 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
2023-07-22 18:11:36,675 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-22 18:11:36,677 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-22 18:11:36,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:36,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:36,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:36,679 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:36,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-22 18:11:36,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 7f23a00cc0ec3efc597549946ee0206d to RSGroup normal 2023-07-22 18:11:36,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE 2023-07-22 18:11:36,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-22 18:11:36,682 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE 2023-07-22 18:11:36,683 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:36,683 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049496682"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049496682"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049496682"}]},"ts":"1690049496682"} 2023-07-22 18:11:36,684 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:36,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f23a00cc0ec3efc597549946ee0206d, disabling compactions & flushes 2023-07-22 18:11:36,843 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 
2023-07-22 18:11:36,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. after waiting 0 ms 2023-07-22 18:11:36,843 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,847 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:36,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:36,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:36,848 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7f23a00cc0ec3efc597549946ee0206d move to jenkins-hbase4.apache.org,38977,1690049474061 record at close sequenceid=2 2023-07-22 18:11:36,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:36,850 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=CLOSED 2023-07-22 18:11:36,850 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049496850"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049496850"}]},"ts":"1690049496850"} 2023-07-22 18:11:36,853 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-22 18:11:36,853 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,45471,1690049478954 in 167 msec 2023-07-22 18:11:36,854 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:37,004 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:37,004 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049497004"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049497004"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049497004"}]},"ts":"1690049497004"} 2023-07-22 18:11:37,006 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:37,162 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:37,162 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f23a00cc0ec3efc597549946ee0206d, NAME => 'unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:37,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:37,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,164 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,165 DEBUG [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/ut 2023-07-22 18:11:37,166 DEBUG [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/ut 2023-07-22 18:11:37,166 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
7f23a00cc0ec3efc597549946ee0206d columnFamilyName ut 2023-07-22 18:11:37,166 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] regionserver.HStore(310): Store=7f23a00cc0ec3efc597549946ee0206d/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:37,167 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,169 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,173 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f23a00cc0ec3efc597549946ee0206d; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10971426400, jitterRate=0.021793708205223083}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:37,173 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:37,176 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d., pid=125, masterSystemTime=1690049497158 2023-07-22 18:11:37,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:37,178 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 
2023-07-22 18:11:37,178 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:37,178 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049497178"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049497178"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049497178"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049497178"}]},"ts":"1690049497178"} 2023-07-22 18:11:37,181 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-22 18:11:37,182 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,38977,1690049474061 in 173 msec 2023-07-22 18:11:37,183 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE in 500 msec 2023-07-22 18:11:37,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-22 18:11:37,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-22 18:11:37,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:37,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:37,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:37,688 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:37,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-22 18:11:37,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-22 18:11:37,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:37,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-22 18:11:37,691 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:37,692 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-22 18:11:37,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:37,694 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:37,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:37,695 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:37,697 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-22 18:11:37,699 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-22 18:11:37,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:37,702 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:37,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-22 18:11:37,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:37,705 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-22 18:11:37,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:37,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-22 18:11:37,706 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:37,710 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:37,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:37,712 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-22 18:11:37,713 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:37,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:37,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:37,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:37,714 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:37,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-22 18:11:37,719 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 7f23a00cc0ec3efc597549946ee0206d to RSGroup default 2023-07-22 18:11:37,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE 2023-07-22 18:11:37,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 18:11:37,720 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE 2023-07-22 18:11:37,721 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:37,721 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049497721"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049497721"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049497721"}]},"ts":"1690049497721"} 2023-07-22 18:11:37,722 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:37,875 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f23a00cc0ec3efc597549946ee0206d, disabling compactions & flushes 2023-07-22 18:11:37,876 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:37,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:37,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. after waiting 0 ms 2023-07-22 18:11:37,876 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:37,880 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:37,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:37,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:37,881 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 7f23a00cc0ec3efc597549946ee0206d move to jenkins-hbase4.apache.org,45471,1690049478954 record at close sequenceid=5 2023-07-22 18:11:37,882 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:37,883 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=CLOSED 2023-07-22 18:11:37,883 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049497883"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049497883"}]},"ts":"1690049497883"} 2023-07-22 18:11:37,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-22 18:11:37,885 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,38977,1690049474061 in 162 msec 2023-07-22 18:11:37,886 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:38,036 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:38,036 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049498036"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049498036"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049498036"}]},"ts":"1690049498036"} 2023-07-22 18:11:38,038 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:38,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:38,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 7f23a00cc0ec3efc597549946ee0206d, NAME => 'unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:38,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:38,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,195 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,196 DEBUG [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/ut 2023-07-22 18:11:38,196 DEBUG [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/ut 2023-07-22 18:11:38,196 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 7f23a00cc0ec3efc597549946ee0206d columnFamilyName ut 2023-07-22 18:11:38,197 INFO [StoreOpener-7f23a00cc0ec3efc597549946ee0206d-1] regionserver.HStore(310): Store=7f23a00cc0ec3efc597549946ee0206d/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:38,197 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,198 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,201 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:38,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 7f23a00cc0ec3efc597549946ee0206d; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9979537600, jitterRate=-0.07058313488960266}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:38,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:38,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d., pid=128, masterSystemTime=1690049498190 2023-07-22 18:11:38,203 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:38,203 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 
2023-07-22 18:11:38,204 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=7f23a00cc0ec3efc597549946ee0206d, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:38,204 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1690049498204"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049498204"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049498204"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049498204"}]},"ts":"1690049498204"} 2023-07-22 18:11:38,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-22 18:11:38,206 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 7f23a00cc0ec3efc597549946ee0206d, server=jenkins-hbase4.apache.org,45471,1690049478954 in 167 msec 2023-07-22 18:11:38,207 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=7f23a00cc0ec3efc597549946ee0206d, REOPEN/MOVE in 487 msec 2023-07-22 18:11:38,363 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-22 18:11:38,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-22 18:11:38,720 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 
2023-07-22 18:11:38,721 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:38,722 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:38977] to rsgroup default 2023-07-22 18:11:38,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-22 18:11:38,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:38,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:38,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:38,725 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:38,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-22 18:11:38,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,38977,1690049474061] are moved back to normal 2023-07-22 18:11:38,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-22 18:11:38,727 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:38,728 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-22 18:11:38,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:38,733 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:38,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:38,734 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-22 18:11:38,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:38,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:38,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:38,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:38,736 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:38,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:38,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:38,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:38,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:38,741 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:38,742 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:38,744 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-22 18:11:38,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:38,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:38,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:38,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-22 18:11:38,747 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(345): Moving region 5ad742ffe0eb92d1c27e3c0036eef8fa to RSGroup default 2023-07-22 18:11:38,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE 2023-07-22 18:11:38,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-22 18:11:38,748 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE 2023-07-22 18:11:38,749 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, 
regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:38,749 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049498749"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049498749"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049498749"}]},"ts":"1690049498749"} 2023-07-22 18:11:38,750 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,33411,1690049473844}] 2023-07-22 18:11:38,903 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5ad742ffe0eb92d1c27e3c0036eef8fa, disabling compactions & flushes 2023-07-22 18:11:38,904 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. after waiting 0 ms 2023-07-22 18:11:38,904 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:38,908 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-22 18:11:38,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 
2023-07-22 18:11:38,910 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:38,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5ad742ffe0eb92d1c27e3c0036eef8fa move to jenkins-hbase4.apache.org,38977,1690049474061 record at close sequenceid=5 2023-07-22 18:11:38,911 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:38,911 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=CLOSED 2023-07-22 18:11:38,912 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049498911"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049498911"}]},"ts":"1690049498911"} 2023-07-22 18:11:38,914 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-22 18:11:38,915 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,33411,1690049473844 in 163 msec 2023-07-22 18:11:38,915 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:39,065 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:39,066 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:39,066 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049499066"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049499066"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049499066"}]},"ts":"1690049499066"} 2023-07-22 18:11:39,068 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:39,223 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 
2023-07-22 18:11:39,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ad742ffe0eb92d1c27e3c0036eef8fa, NAME => 'testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:39,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:39,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,223 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,225 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,226 DEBUG [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/tr 2023-07-22 18:11:39,226 DEBUG [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/tr 2023-07-22 18:11:39,226 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ad742ffe0eb92d1c27e3c0036eef8fa columnFamilyName tr 2023-07-22 18:11:39,227 INFO [StoreOpener-5ad742ffe0eb92d1c27e3c0036eef8fa-1] regionserver.HStore(310): Store=5ad742ffe0eb92d1c27e3c0036eef8fa/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:39,227 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,228 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:39,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5ad742ffe0eb92d1c27e3c0036eef8fa; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11028800000, jitterRate=0.027137041091918945}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:39,232 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:39,233 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa., pid=131, masterSystemTime=1690049499219 2023-07-22 18:11:39,234 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:39,234 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:39,234 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=5ad742ffe0eb92d1c27e3c0036eef8fa, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:39,235 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1690049499234"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049499234"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049499234"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049499234"}]},"ts":"1690049499234"} 2023-07-22 18:11:39,237 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-22 18:11:39,237 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 5ad742ffe0eb92d1c27e3c0036eef8fa, server=jenkins-hbase4.apache.org,38977,1690049474061 in 168 msec 2023-07-22 18:11:39,238 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=5ad742ffe0eb92d1c27e3c0036eef8fa, REOPEN/MOVE in 490 msec 2023-07-22 18:11:39,748 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-22 18:11:39,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 
2023-07-22 18:11:39,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:39,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup default 2023-07-22 18:11:39,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-22 18:11:39,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:39,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-22 18:11:39,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to newgroup 2023-07-22 18:11:39,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-22 18:11:39,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:39,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-22 18:11:39,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,759 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:39,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:39,763 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:39,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:39,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,765 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:39,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:39,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:39,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050699779, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:39,780 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:39,781 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:39,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,782 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:39,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:39,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,799 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=507 (was 514), OpenFileDescriptor=775 (was 801), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 416) - SystemLoadAverage LEAK? -, ProcessCount=172 (was 174), AvailableMemoryMB=8375 (was 6209) - AvailableMemoryMB LEAK? - 2023-07-22 18:11:39,799 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-22 18:11:39,815 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=507, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=172, AvailableMemoryMB=8375 2023-07-22 18:11:39,815 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=507 is superior to 500 2023-07-22 18:11:39,815 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-22 18:11:39,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:39,819 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:39,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:39,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:39,820 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:39,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:39,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:39,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:39,828 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:39,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:39,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,831 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:39,835 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:39,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050699839, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:39,840 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:39,842 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:39,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,842 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,842 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:39,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:39,843 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-22 18:11:39,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:39,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-22 18:11:39,849 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-22 18:11:39,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-22 18:11:39,850 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,851 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-22 18:11:39,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:34802 deadline: 1690050699851, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-22 18:11:39,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-22 18:11:39,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,853 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 803 service: MasterService methodName: 
ExecMasterService size: 96 connection: 172.31.14.131:34802 deadline: 1690050699853, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-22 18:11:39,855 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-22 18:11:39,855 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-22 18:11:39,860 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-22 18:11:39,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:34802 deadline: 1690050699859, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-22 18:11:39,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,864 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:39,865 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:39,865 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:39,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:39,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:39,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:39,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,870 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:39,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:39,873 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:39,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:39,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,876 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,878 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:39,879 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:39,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,882 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,884 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:39,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050699884, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:39,887 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:39,888 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:39,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,889 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:39,889 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:39,890 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,905 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=511 (was 507) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-22 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x7988d72f-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=775 (was 775), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=438 (was 438), ProcessCount=172 (was 172), AvailableMemoryMB=8375 (was 8375) 2023-07-22 18:11:39,905 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-22 18:11:39,921 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=511, OpenFileDescriptor=775, MaxFileDescriptor=60000, SystemLoadAverage=438, ProcessCount=172, AvailableMemoryMB=8374 2023-07-22 18:11:39,921 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=511 is superior to 500 2023-07-22 18:11:39,921 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-22 18:11:39,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,925 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:39,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:39,926 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:39,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:39,932 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:39,935 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:39,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:39,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,938 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,939 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:39,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:39,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,947 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:39,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:39,947 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050699946, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:39,947 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:39,949 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,950 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,951 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:39,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:39,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:39,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,954 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,956 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 
18:11:39,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,961 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:39,962 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:39,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,967 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,970 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:39,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-22 18:11:39,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to default 2023-07-22 18:11:39,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,971 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:39,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:39,974 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:39,976 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,976 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:39,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:39,979 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:39,981 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:39,981 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-22 18:11:39,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-22 18:11:39,983 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1131336537 2023-07-22 18:11:39,984 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:39,984 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:39,985 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:39,990 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:39,995 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:39,995 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:39,995 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:39,995 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:39,995 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:39,996 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 empty. 2023-07-22 18:11:39,996 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 empty. 2023-07-22 18:11:39,996 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 empty. 2023-07-22 18:11:39,996 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 empty. 2023-07-22 18:11:39,996 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:39,996 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:39,997 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:39,997 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e empty. 
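For readers following the RPC entries near the start of this excerpt (RSGroupAdminService.AddRSGroup, MoveServers, GetRSGroupInfo): client-side, those operations map onto the branch-2 RSGroupAdminClient roughly as sketched below. This is a hedged illustration, not the test's actual code; the group name and the server host:port values are copied from the log, while the wrapper class, method name, and `conn` parameter are assumptions made for the sketch.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RsGroupSetupSketch {
  /** Mirrors the AddRSGroup / MoveServers / GetRSGroupInfo RPCs logged above. */
  static RSGroupInfo setUpGroup(Connection conn) throws IOException {
    RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

    // AddRSGroup: create the group named in the log.
    rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1131336537");

    // MoveServers: move the two region servers listed in the log out of 'default'
    // and into the new group; re-homing their regions is what produces the
    // "Moving 0 region(s) to group default" lines above.
    rsGroupAdmin.moveServers(
        new HashSet<>(Arrays.asList(
            Address.fromParts("jenkins-hbase4.apache.org", 33411),
            Address.fromParts("jenkins-hbase4.apache.org", 38507))),
        "Group_testDisabledTableMove_1131336537");

    // GetRSGroupInfo: read the group back, as the final RPC before table creation does.
    return rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_1131336537");
  }
}
```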
2023-07-22 18:11:39,997 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:39,997 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:39,997 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-22 18:11:40,021 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:40,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 68071b2a0b4e53417e3c03cc53e205f9, NAME => 'Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:40,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => 38b4582f463aa5141cee0a119a130a5e, NAME => 'Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:40,022 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 1dd0f0f8e083b3a547085dafd6549986, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:40,062 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now 
enable 2023-07-22 18:11:40,062 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 68071b2a0b4e53417e3c03cc53e205f9, disabling compactions & flushes 2023-07-22 18:11:40,062 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,062 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,063 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. after waiting 0 ms 2023-07-22 18:11:40,063 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,063 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,063 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 68071b2a0b4e53417e3c03cc53e205f9: 2023-07-22 18:11:40,063 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => cc3e46be1d1407e823960e6f5e389a30, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:40,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 1dd0f0f8e083b3a547085dafd6549986, disabling compactions & flushes 2023-07-22 18:11:40,067 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,067 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 
after waiting 0 ms 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,068 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 1dd0f0f8e083b3a547085dafd6549986: 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing 38b4582f463aa5141cee0a119a130a5e, disabling compactions & flushes 2023-07-22 18:11:40,068 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 8a7eb60124f79e2681d99edaf849e293, NAME => 'Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp 2023-07-22 18:11:40,068 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,068 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. after waiting 0 ms 2023-07-22 18:11:40,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,069 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 
2023-07-22 18:11:40,069 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for 38b4582f463aa5141cee0a119a130a5e: 2023-07-22 18:11:40,080 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,080 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing cc3e46be1d1407e823960e6f5e389a30, disabling compactions & flushes 2023-07-22 18:11:40,080 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,080 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,080 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. after waiting 0 ms 2023-07-22 18:11:40,080 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,081 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,081 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for cc3e46be1d1407e823960e6f5e389a30: 2023-07-22 18:11:40,083 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-22 18:11:40,085 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,085 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 8a7eb60124f79e2681d99edaf849e293, disabling compactions & flushes 2023-07-22 18:11:40,085 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,086 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,086 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 
after waiting 0 ms 2023-07-22 18:11:40,086 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,086 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,086 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 8a7eb60124f79e2681d99edaf849e293: 2023-07-22 18:11:40,088 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:40,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500089"}]},"ts":"1690049500089"} 2023-07-22 18:11:40,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500089"}]},"ts":"1690049500089"} 2023-07-22 18:11:40,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500089"}]},"ts":"1690049500089"} 2023-07-22 18:11:40,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500089"}]},"ts":"1690049500089"} 2023-07-22 18:11:40,089 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500089"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500089"}]},"ts":"1690049500089"} 2023-07-22 18:11:40,091 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-22 18:11:40,092 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:40,092 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049500092"}]},"ts":"1690049500092"} 2023-07-22 18:11:40,093 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-22 18:11:40,096 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:40,096 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:40,096 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:40,096 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:40,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, ASSIGN}] 2023-07-22 18:11:40,099 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, ASSIGN 2023-07-22 18:11:40,099 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, ASSIGN 2023-07-22 18:11:40,099 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, ASSIGN 2023-07-22 18:11:40,099 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, ASSIGN 2023-07-22 18:11:40,100 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:40,100 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38977,1690049474061; forceNewPlan=false, retain=false 2023-07-22 18:11:40,100 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:40,100 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:40,100 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, ASSIGN 2023-07-22 18:11:40,101 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,45471,1690049478954; forceNewPlan=false, retain=false 2023-07-22 18:11:40,250 INFO [jenkins-hbase4:40289] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-22 18:11:40,255 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=cc3e46be1d1407e823960e6f5e389a30, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:40,255 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=1dd0f0f8e083b3a547085dafd6549986, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:40,255 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500255"}]},"ts":"1690049500255"} 2023-07-22 18:11:40,255 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=38b4582f463aa5141cee0a119a130a5e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,255 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8a7eb60124f79e2681d99edaf849e293, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,255 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500255"}]},"ts":"1690049500255"} 2023-07-22 18:11:40,255 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500255"}]},"ts":"1690049500255"} 2023-07-22 18:11:40,255 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=68071b2a0b4e53417e3c03cc53e205f9, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,255 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500255"}]},"ts":"1690049500255"} 2023-07-22 18:11:40,256 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500255"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500255"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500255"}]},"ts":"1690049500255"} 2023-07-22 18:11:40,257 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=136, state=RUNNABLE; OpenRegionProcedure cc3e46be1d1407e823960e6f5e389a30, 
server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:40,258 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=134, state=RUNNABLE; OpenRegionProcedure 38b4582f463aa5141cee0a119a130a5e, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:40,258 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=137, state=RUNNABLE; OpenRegionProcedure 8a7eb60124f79e2681d99edaf849e293, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:40,259 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=135, state=RUNNABLE; OpenRegionProcedure 1dd0f0f8e083b3a547085dafd6549986, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:40,261 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=133, state=RUNNABLE; OpenRegionProcedure 68071b2a0b4e53417e3c03cc53e205f9, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:40,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-22 18:11:40,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1dd0f0f8e083b3a547085dafd6549986, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-22 18:11:40,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,413 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 
2023-07-22 18:11:40,413 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 38b4582f463aa5141cee0a119a130a5e, NAME => 'Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,414 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,419 INFO [StoreOpener-1dd0f0f8e083b3a547085dafd6549986-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,419 INFO [StoreOpener-38b4582f463aa5141cee0a119a130a5e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,420 DEBUG [StoreOpener-1dd0f0f8e083b3a547085dafd6549986-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/f 2023-07-22 18:11:40,420 DEBUG [StoreOpener-1dd0f0f8e083b3a547085dafd6549986-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/f 2023-07-22 18:11:40,420 DEBUG [StoreOpener-38b4582f463aa5141cee0a119a130a5e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/f 2023-07-22 18:11:40,420 DEBUG [StoreOpener-38b4582f463aa5141cee0a119a130a5e-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/f 2023-07-22 18:11:40,420 INFO [StoreOpener-1dd0f0f8e083b3a547085dafd6549986-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1dd0f0f8e083b3a547085dafd6549986 columnFamilyName f 2023-07-22 18:11:40,421 INFO [StoreOpener-38b4582f463aa5141cee0a119a130a5e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 38b4582f463aa5141cee0a119a130a5e columnFamilyName f 2023-07-22 18:11:40,421 INFO [StoreOpener-1dd0f0f8e083b3a547085dafd6549986-1] regionserver.HStore(310): Store=1dd0f0f8e083b3a547085dafd6549986/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:40,421 INFO [StoreOpener-38b4582f463aa5141cee0a119a130a5e-1] regionserver.HStore(310): Store=38b4582f463aa5141cee0a119a130a5e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:40,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,422 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,423 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,426 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:40,428 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:40,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1dd0f0f8e083b3a547085dafd6549986; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10142382080, jitterRate=-0.05541706085205078}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:40,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 38b4582f463aa5141cee0a119a130a5e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=12026114720, jitterRate=0.12001921236515045}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:40,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1dd0f0f8e083b3a547085dafd6549986: 2023-07-22 18:11:40,429 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 38b4582f463aa5141cee0a119a130a5e: 2023-07-22 18:11:40,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986., pid=141, masterSystemTime=1690049500409 2023-07-22 18:11:40,429 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e., pid=139, masterSystemTime=1690049500410 2023-07-22 18:11:40,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,431 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,431 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 
2023-07-22 18:11:40,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cc3e46be1d1407e823960e6f5e389a30, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-22 18:11:40,431 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=1dd0f0f8e083b3a547085dafd6549986, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:40,431 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500431"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049500431"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049500431"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049500431"}]},"ts":"1690049500431"} 2023-07-22 18:11:40,431 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,432 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 
2023-07-22 18:11:40,432 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 68071b2a0b4e53417e3c03cc53e205f9, NAME => 'Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-22 18:11:40,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,433 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,433 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=38b4582f463aa5141cee0a119a130a5e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,434 INFO [StoreOpener-cc3e46be1d1407e823960e6f5e389a30-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,434 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500433"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049500433"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049500433"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049500433"}]},"ts":"1690049500433"} 2023-07-22 18:11:40,436 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=135 2023-07-22 18:11:40,436 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=135, state=SUCCESS; OpenRegionProcedure 1dd0f0f8e083b3a547085dafd6549986, server=jenkins-hbase4.apache.org,38977,1690049474061 in 174 msec 2023-07-22 18:11:40,437 INFO [StoreOpener-68071b2a0b4e53417e3c03cc53e205f9-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,437 DEBUG [StoreOpener-cc3e46be1d1407e823960e6f5e389a30-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/f 2023-07-22 18:11:40,437 DEBUG [StoreOpener-cc3e46be1d1407e823960e6f5e389a30-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/f 2023-07-22 18:11:40,438 INFO [StoreOpener-cc3e46be1d1407e823960e6f5e389a30-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cc3e46be1d1407e823960e6f5e389a30 columnFamilyName f 2023-07-22 18:11:40,438 DEBUG [StoreOpener-68071b2a0b4e53417e3c03cc53e205f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/f 2023-07-22 18:11:40,438 INFO [StoreOpener-cc3e46be1d1407e823960e6f5e389a30-1] regionserver.HStore(310): Store=cc3e46be1d1407e823960e6f5e389a30/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:40,438 DEBUG [StoreOpener-68071b2a0b4e53417e3c03cc53e205f9-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/f 2023-07-22 18:11:40,439 INFO [StoreOpener-68071b2a0b4e53417e3c03cc53e205f9-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 68071b2a0b4e53417e3c03cc53e205f9 columnFamilyName f 2023-07-22 18:11:40,440 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, ASSIGN in 339 msec 2023-07-22 18:11:40,440 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=134 2023-07-22 18:11:40,440 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=134, state=SUCCESS; OpenRegionProcedure 38b4582f463aa5141cee0a119a130a5e, server=jenkins-hbase4.apache.org,45471,1690049478954 in 178 msec 2023-07-22 18:11:40,441 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, ASSIGN in 343 msec 2023-07-22 18:11:40,445 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,445 INFO [StoreOpener-68071b2a0b4e53417e3c03cc53e205f9-1] regionserver.HStore(310): Store=68071b2a0b4e53417e3c03cc53e205f9/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:40,445 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,446 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,465 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:40,474 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cc3e46be1d1407e823960e6f5e389a30; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11798304000, jitterRate=0.09880268573760986}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:40,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cc3e46be1d1407e823960e6f5e389a30: 2023-07-22 18:11:40,475 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:40,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30., pid=138, masterSystemTime=1690049500409 2023-07-22 18:11:40,475 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 68071b2a0b4e53417e3c03cc53e205f9; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11472520480, jitterRate=0.06846173107624054}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:40,476 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 68071b2a0b4e53417e3c03cc53e205f9: 2023-07-22 18:11:40,477 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9., pid=142, masterSystemTime=1690049500410 2023-07-22 18:11:40,479 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=cc3e46be1d1407e823960e6f5e389a30, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:40,479 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500479"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049500479"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049500479"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049500479"}]},"ts":"1690049500479"} 2023-07-22 18:11:40,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,480 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,480 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,481 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=68071b2a0b4e53417e3c03cc53e205f9, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,481 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500481"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049500481"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049500481"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049500481"}]},"ts":"1690049500481"} 2023-07-22 18:11:40,481 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,482 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 
2023-07-22 18:11:40,482 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 8a7eb60124f79e2681d99edaf849e293, NAME => 'Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-22 18:11:40,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:40,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,483 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,483 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=136 2023-07-22 18:11:40,483 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=136, state=SUCCESS; OpenRegionProcedure cc3e46be1d1407e823960e6f5e389a30, server=jenkins-hbase4.apache.org,38977,1690049474061 in 224 msec 2023-07-22 18:11:40,485 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=133 2023-07-22 18:11:40,485 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, ASSIGN in 386 msec 2023-07-22 18:11:40,485 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=133, state=SUCCESS; OpenRegionProcedure 68071b2a0b4e53417e3c03cc53e205f9, server=jenkins-hbase4.apache.org,45471,1690049478954 in 221 msec 2023-07-22 18:11:40,486 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, ASSIGN in 388 msec 2023-07-22 18:11:40,491 INFO [StoreOpener-8a7eb60124f79e2681d99edaf849e293-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,493 DEBUG [StoreOpener-8a7eb60124f79e2681d99edaf849e293-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/f 2023-07-22 18:11:40,493 DEBUG [StoreOpener-8a7eb60124f79e2681d99edaf849e293-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/f 2023-07-22 18:11:40,493 INFO [StoreOpener-8a7eb60124f79e2681d99edaf849e293-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 8a7eb60124f79e2681d99edaf849e293 columnFamilyName f 2023-07-22 18:11:40,498 INFO [StoreOpener-8a7eb60124f79e2681d99edaf849e293-1] regionserver.HStore(310): Store=8a7eb60124f79e2681d99edaf849e293/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:40,499 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,512 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:40,513 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 8a7eb60124f79e2681d99edaf849e293; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10034938720, jitterRate=-0.06542350351810455}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:40,513 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 8a7eb60124f79e2681d99edaf849e293: 2023-07-22 18:11:40,514 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293., pid=140, masterSystemTime=1690049500410 2023-07-22 18:11:40,516 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=8a7eb60124f79e2681d99edaf849e293, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,516 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500516"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049500516"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049500516"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049500516"}]},"ts":"1690049500516"} 2023-07-22 18:11:40,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=137 2023-07-22 18:11:40,521 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=137, state=SUCCESS; OpenRegionProcedure 8a7eb60124f79e2681d99edaf849e293, server=jenkins-hbase4.apache.org,45471,1690049478954 in 260 msec 2023-07-22 18:11:40,523 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=137, resume processing ppid=132 2023-07-22 18:11:40,523 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, ASSIGN in 424 msec 2023-07-22 18:11:40,524 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,524 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,524 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:40,524 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049500524"}]},"ts":"1690049500524"} 2023-07-22 18:11:40,526 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-22 18:11:40,528 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:40,530 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 550 msec 2023-07-22 18:11:40,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-22 18:11:40,586 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-22 18:11:40,586 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-22 18:11:40,586 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:40,590 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-22 18:11:40,590 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:40,591 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-22 18:11:40,591 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:40,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-22 18:11:40,597 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:40,597 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-22 18:11:40,598 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-22 18:11:40,599 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,601 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-22 18:11:40,602 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049500602"}]},"ts":"1690049500602"} 2023-07-22 18:11:40,603 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-22 18:11:40,606 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-22 18:11:40,607 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, UNASSIGN}] 2023-07-22 18:11:40,609 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, UNASSIGN 2023-07-22 18:11:40,609 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, 
state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, UNASSIGN 2023-07-22 18:11:40,609 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, UNASSIGN 2023-07-22 18:11:40,609 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, UNASSIGN 2023-07-22 18:11:40,609 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, UNASSIGN 2023-07-22 18:11:40,610 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=1dd0f0f8e083b3a547085dafd6549986, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:40,610 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=68071b2a0b4e53417e3c03cc53e205f9, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,610 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500610"}]},"ts":"1690049500610"} 2023-07-22 18:11:40,610 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500610"}]},"ts":"1690049500610"} 2023-07-22 18:11:40,610 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=38b4582f463aa5141cee0a119a130a5e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,610 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8a7eb60124f79e2681d99edaf849e293, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:40,610 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500610"}]},"ts":"1690049500610"} 2023-07-22 18:11:40,610 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500610"}]},"ts":"1690049500610"} 2023-07-22 18:11:40,610 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=cc3e46be1d1407e823960e6f5e389a30, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:40,611 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500610"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049500610"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049500610"}]},"ts":"1690049500610"} 2023-07-22 18:11:40,612 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=146, state=RUNNABLE; CloseRegionProcedure 1dd0f0f8e083b3a547085dafd6549986, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:40,613 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; CloseRegionProcedure 68071b2a0b4e53417e3c03cc53e205f9, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:40,614 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=145, state=RUNNABLE; CloseRegionProcedure 38b4582f463aa5141cee0a119a130a5e, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:40,615 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=148, state=RUNNABLE; CloseRegionProcedure 8a7eb60124f79e2681d99edaf849e293, server=jenkins-hbase4.apache.org,45471,1690049478954}] 2023-07-22 18:11:40,616 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=147, state=RUNNABLE; CloseRegionProcedure cc3e46be1d1407e823960e6f5e389a30, server=jenkins-hbase4.apache.org,38977,1690049474061}] 2023-07-22 18:11:40,702 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-22 18:11:40,711 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-22 18:11:40,711 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'Group_testDisabledTableMove' 2023-07-22 18:11:40,764 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,765 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1dd0f0f8e083b3a547085dafd6549986, disabling compactions & flushes 2023-07-22 18:11:40,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 
2023-07-22 18:11:40,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. after waiting 0 ms 2023-07-22 18:11:40,766 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,766 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 38b4582f463aa5141cee0a119a130a5e, disabling compactions & flushes 2023-07-22 18:11:40,767 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. after waiting 0 ms 2023-07-22 18:11:40,767 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 2023-07-22 18:11:40,771 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:40,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:40,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986. 2023-07-22 18:11:40,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1dd0f0f8e083b3a547085dafd6549986: 2023-07-22 18:11:40,772 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e. 
2023-07-22 18:11:40,772 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 38b4582f463aa5141cee0a119a130a5e: 2023-07-22 18:11:40,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,774 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=1dd0f0f8e083b3a547085dafd6549986, regionState=CLOSED 2023-07-22 18:11:40,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,775 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500774"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500774"}]},"ts":"1690049500774"} 2023-07-22 18:11:40,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cc3e46be1d1407e823960e6f5e389a30, disabling compactions & flushes 2023-07-22 18:11:40,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. after waiting 0 ms 2023-07-22 18:11:40,776 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 2023-07-22 18:11:40,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,776 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 8a7eb60124f79e2681d99edaf849e293, disabling compactions & flushes 2023-07-22 18:11:40,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,783 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=38b4582f463aa5141cee0a119a130a5e, regionState=CLOSED 2023-07-22 18:11:40,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 
after waiting 0 ms 2023-07-22 18:11:40,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,783 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500783"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500783"}]},"ts":"1690049500783"} 2023-07-22 18:11:40,787 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=146 2023-07-22 18:11:40,787 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=146, state=SUCCESS; CloseRegionProcedure 1dd0f0f8e083b3a547085dafd6549986, server=jenkins-hbase4.apache.org,38977,1690049474061 in 171 msec 2023-07-22 18:11:40,789 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=145 2023-07-22 18:11:40,789 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=1dd0f0f8e083b3a547085dafd6549986, UNASSIGN in 180 msec 2023-07-22 18:11:40,789 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=145, state=SUCCESS; CloseRegionProcedure 38b4582f463aa5141cee0a119a130a5e, server=jenkins-hbase4.apache.org,45471,1690049478954 in 171 msec 2023-07-22 18:11:40,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:40,790 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:40,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293. 2023-07-22 18:11:40,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 8a7eb60124f79e2681d99edaf849e293: 2023-07-22 18:11:40,791 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30. 
2023-07-22 18:11:40,791 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cc3e46be1d1407e823960e6f5e389a30: 2023-07-22 18:11:40,791 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=38b4582f463aa5141cee0a119a130a5e, UNASSIGN in 182 msec 2023-07-22 18:11:40,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,792 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,793 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 68071b2a0b4e53417e3c03cc53e205f9, disabling compactions & flushes 2023-07-22 18:11:40,794 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. after waiting 0 ms 2023-07-22 18:11:40,794 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 2023-07-22 18:11:40,795 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=8a7eb60124f79e2681d99edaf849e293, regionState=CLOSED 2023-07-22 18:11:40,795 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500795"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500795"}]},"ts":"1690049500795"} 2023-07-22 18:11:40,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,796 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=cc3e46be1d1407e823960e6f5e389a30, regionState=CLOSED 2023-07-22 18:11:40,797 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1690049500796"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500796"}]},"ts":"1690049500796"} 2023-07-22 18:11:40,800 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:40,801 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9. 
2023-07-22 18:11:40,801 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 68071b2a0b4e53417e3c03cc53e205f9: 2023-07-22 18:11:40,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=148 2023-07-22 18:11:40,801 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=148, state=SUCCESS; CloseRegionProcedure 8a7eb60124f79e2681d99edaf849e293, server=jenkins-hbase4.apache.org,45471,1690049478954 in 182 msec 2023-07-22 18:11:40,802 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=147 2023-07-22 18:11:40,802 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=147, state=SUCCESS; CloseRegionProcedure cc3e46be1d1407e823960e6f5e389a30, server=jenkins-hbase4.apache.org,38977,1690049474061 in 183 msec 2023-07-22 18:11:40,803 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,803 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=8a7eb60124f79e2681d99edaf849e293, UNASSIGN in 194 msec 2023-07-22 18:11:40,803 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=68071b2a0b4e53417e3c03cc53e205f9, regionState=CLOSED 2023-07-22 18:11:40,803 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1690049500803"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049500803"}]},"ts":"1690049500803"} 2023-07-22 18:11:40,804 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cc3e46be1d1407e823960e6f5e389a30, UNASSIGN in 195 msec 2023-07-22 18:11:40,807 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-22 18:11:40,807 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; CloseRegionProcedure 68071b2a0b4e53417e3c03cc53e205f9, server=jenkins-hbase4.apache.org,45471,1690049478954 in 192 msec 2023-07-22 18:11:40,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-22 18:11:40,809 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=68071b2a0b4e53417e3c03cc53e205f9, UNASSIGN in 200 msec 2023-07-22 18:11:40,811 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049500811"}]},"ts":"1690049500811"} 2023-07-22 18:11:40,812 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-22 18:11:40,813 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-22 18:11:40,816 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 
216 msec 2023-07-22 18:11:40,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-22 18:11:40,904 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-22 18:11:40,904 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1131336537 2023-07-22 18:11:40,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1131336537 2023-07-22 18:11:40,909 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1131336537 2023-07-22 18:11:40,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:40,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:40,910 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:40,912 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-22 18:11:40,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1131336537, current retry=0 2023-07-22 18:11:40,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1131336537. 
2023-07-22 18:11:40,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:40,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:40,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:40,918 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-22 18:11:40,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:40,920 INFO [Listener at localhost/37829] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-22 18:11:40,921 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-22 18:11:40,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:40,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] ipc.CallRunner(144): callId: 919 service: MasterService methodName: DisableTable size: 89 connection: 172.31.14.131:34802 deadline: 1690049560920, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-22 18:11:40,922 DEBUG [Listener at localhost/37829] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 
2023-07-22 18:11:40,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-22 18:11:40,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,925 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1131336537' 2023-07-22 18:11:40,926 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1131336537 2023-07-22 18:11:40,928 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:40,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:40,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:40,935 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,935 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,935 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,936 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,936 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-22 18:11:40,939 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/f, FileablePath, 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/recovered.edits] 2023-07-22 18:11:40,939 DEBUG [HFileArchiver-4] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/recovered.edits] 2023-07-22 18:11:40,939 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/recovered.edits] 2023-07-22 18:11:40,940 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/recovered.edits] 2023-07-22 18:11:40,940 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/f, FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/recovered.edits] 2023-07-22 18:11:40,951 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e/recovered.edits/4.seqid 2023-07-22 18:11:40,952 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9/recovered.edits/4.seqid 2023-07-22 18:11:40,952 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/38b4582f463aa5141cee0a119a130a5e 2023-07-22 18:11:40,953 DEBUG [HFileArchiver-4] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/recovered.edits/4.seqid to 
hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986/recovered.edits/4.seqid 2023-07-22 18:11:40,955 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/68071b2a0b4e53417e3c03cc53e205f9 2023-07-22 18:11:40,955 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293/recovered.edits/4.seqid 2023-07-22 18:11:40,955 DEBUG [HFileArchiver-4] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/1dd0f0f8e083b3a547085dafd6549986 2023-07-22 18:11:40,956 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/recovered.edits/4.seqid to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/archive/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30/recovered.edits/4.seqid 2023-07-22 18:11:40,956 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/8a7eb60124f79e2681d99edaf849e293 2023-07-22 18:11:40,957 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/.tmp/data/default/Group_testDisabledTableMove/cc3e46be1d1407e823960e6f5e389a30 2023-07-22 18:11:40,957 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-22 18:11:40,959 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,962 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-22 18:11:40,968 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 2023-07-22 18:11:40,969 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,969 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 
2023-07-22 18:11:40,969 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049500969"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:40,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049500969"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:40,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049500969"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:40,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049500969"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:40,970 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049500969"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:40,973 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-22 18:11:40,973 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 68071b2a0b4e53417e3c03cc53e205f9, NAME => 'Group_testDisabledTableMove,,1690049499978.68071b2a0b4e53417e3c03cc53e205f9.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 38b4582f463aa5141cee0a119a130a5e, NAME => 'Group_testDisabledTableMove,aaaaa,1690049499978.38b4582f463aa5141cee0a119a130a5e.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 1dd0f0f8e083b3a547085dafd6549986, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1690049499978.1dd0f0f8e083b3a547085dafd6549986.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => cc3e46be1d1407e823960e6f5e389a30, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1690049499978.cc3e46be1d1407e823960e6f5e389a30.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 8a7eb60124f79e2681d99edaf849e293, NAME => 'Group_testDisabledTableMove,zzzzz,1690049499978.8a7eb60124f79e2681d99edaf849e293.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-22 18:11:40,973 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-22 18:11:40,973 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049500973"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:40,975 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-22 18:11:40,977 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-22 18:11:40,978 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 54 msec 2023-07-22 18:11:41,039 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-22 18:11:41,039 INFO [Listener at localhost/37829] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-22 18:11:41,044 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:41,044 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:41,045 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:41,046 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:41,046 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:41,047 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507] to rsgroup default 2023-07-22 18:11:41,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1131336537 2023-07-22 18:11:41,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:41,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:41,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:41,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1131336537, current retry=0 2023-07-22 18:11:41,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33411,1690049473844, jenkins-hbase4.apache.org,38507,1690049474291] are moved back to Group_testDisabledTableMove_1131336537 2023-07-22 18:11:41,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1131336537 => default 2023-07-22 18:11:41,053 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:41,054 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1131336537 2023-07-22 18:11:41,058 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:41,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:41,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:41,069 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:41,070 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:41,070 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:41,070 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:41,071 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:41,071 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:41,072 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:41,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:41,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:41,079 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:41,082 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:41,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:41,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:41,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:41,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:41,095 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:41,102 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:41,102 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:41,105 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:41,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:41,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] ipc.CallRunner(144): callId: 953 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050701104, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:41,105 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:41,108 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:41,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:41,109 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:41,109 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:41,110 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:41,110 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:41,144 INFO [Listener at localhost/37829] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=513 (was 511) Potentially hanging thread: hconnection-0x5a2c0b37-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x744a8a1-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_276401251_17 at /127.0.0.1:51896 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_739985057_17 at /127.0.0.1:50422 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=787 (was 775) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=420 (was 438), ProcessCount=172 (was 172), AvailableMemoryMB=8359 (was 8374) 2023-07-22 18:11:41,145 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-22 18:11:41,168 INFO [Listener at localhost/37829] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=513, OpenFileDescriptor=787, MaxFileDescriptor=60000, SystemLoadAverage=420, ProcessCount=172, AvailableMemoryMB=8358 2023-07-22 18:11:41,168 WARN [Listener at localhost/37829] hbase.ResourceChecker(130): Thread=513 is superior to 500 2023-07-22 18:11:41,168 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-22 18:11:41,172 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:41,172 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:41,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:41,173 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 18:11:41,173 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:41,174 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:41,174 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:41,175 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:41,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:41,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:41,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:41,184 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:41,185 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:41,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 
18:11:41,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:41,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:41,194 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:41,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:41,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:41,199 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:40289] to rsgroup master 2023-07-22 18:11:41,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:41,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] ipc.CallRunner(144): callId: 981 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:34802 deadline: 1690050701198, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 2023-07-22 18:11:41,199 WARN [Listener at localhost/37829] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:40289 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:41,201 INFO [Listener at localhost/37829] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:41,202 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:41,203 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:41,203 INFO [Listener at localhost/37829] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33411, jenkins-hbase4.apache.org:38507, jenkins-hbase4.apache.org:38977, jenkins-hbase4.apache.org:45471], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:41,204 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:41,204 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=40289] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:41,204 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-22 18:11:41,204 INFO [Listener at localhost/37829] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-22 18:11:41,205 DEBUG [Listener at localhost/37829] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x702c0ae8 to 127.0.0.1:62144 2023-07-22 18:11:41,205 DEBUG [Listener at localhost/37829] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,207 DEBUG [Listener at localhost/37829] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-22 18:11:41,207 DEBUG [Listener at localhost/37829] util.JVMClusterUtil(257): Found active master hash=973291378, stopped=false 2023-07-22 18:11:41,207 DEBUG [Listener at localhost/37829] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 18:11:41,207 DEBUG [Listener at localhost/37829] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 18:11:41,207 INFO [Listener at localhost/37829] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:41,213 INFO [Listener at localhost/37829] procedure2.ProcedureExecutor(629): Stopping 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received 
ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:41,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:41,213 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:41,213 DEBUG [Listener at localhost/37829] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x20eaf84c to 127.0.0.1:62144 2023-07-22 18:11:41,214 DEBUG [Listener at localhost/37829] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:41,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:41,214 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:41,214 INFO [Listener at localhost/37829] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33411,1690049473844' ***** 2023-07-22 18:11:41,214 INFO [Listener at localhost/37829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:41,214 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:41,216 INFO [Listener at localhost/37829] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38977,1690049474061' ***** 2023-07-22 18:11:41,216 INFO [Listener at localhost/37829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:41,217 INFO [Listener at localhost/37829] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38507,1690049474291' ***** 2023-07-22 18:11:41,217 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:41,217 INFO [Listener at localhost/37829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:41,217 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:41,220 INFO [Listener at localhost/37829] regionserver.HRegionServer(2297): ***** STOPPING region 
server 'jenkins-hbase4.apache.org,45471,1690049478954' ***** 2023-07-22 18:11:41,221 INFO [Listener at localhost/37829] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:41,222 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:41,235 INFO [RS:1;jenkins-hbase4:38977] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1b56eac1{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:41,235 INFO [RS:3;jenkins-hbase4:45471] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@780935ef{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:41,235 INFO [RS:0;jenkins-hbase4:33411] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@6a9e2012{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:41,235 INFO [RS:2;jenkins-hbase4:38507] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7cc51cf3{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:41,239 INFO [RS:0;jenkins-hbase4:33411] server.AbstractConnector(383): Stopped ServerConnector@69156046{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:41,239 INFO [RS:1;jenkins-hbase4:38977] server.AbstractConnector(383): Stopped ServerConnector@5d0a3d54{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:41,239 INFO [RS:3;jenkins-hbase4:45471] server.AbstractConnector(383): Stopped ServerConnector@6057e31f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:41,239 INFO [RS:2;jenkins-hbase4:38507] server.AbstractConnector(383): Stopped ServerConnector@3023e605{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:41,239 INFO [RS:3;jenkins-hbase4:45471] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:41,239 INFO [RS:1;jenkins-hbase4:38977] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:41,239 INFO [RS:0;jenkins-hbase4:33411] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:41,239 INFO [RS:2;jenkins-hbase4:38507] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:41,240 INFO [RS:1;jenkins-hbase4:38977] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@33446a5{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:41,240 INFO [RS:3;jenkins-hbase4:45471] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6dd20c46{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:41,242 INFO [RS:0;jenkins-hbase4:33411] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@53b9762b{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:41,242 INFO [RS:3;jenkins-hbase4:45471] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2c02caab{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:41,243 INFO [RS:0;jenkins-hbase4:33411] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2ffb745f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:41,242 INFO [RS:1;jenkins-hbase4:38977] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@19459c3b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:41,241 INFO [RS:2;jenkins-hbase4:38507] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2855a58d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:41,244 INFO [RS:2;jenkins-hbase4:38507] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5f20ff62{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:41,246 INFO [RS:0;jenkins-hbase4:33411] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:41,247 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:41,247 INFO [RS:0;jenkins-hbase4:33411] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:41,247 INFO [RS:2;jenkins-hbase4:38507] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:41,248 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:41,248 INFO [RS:2;jenkins-hbase4:38507] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:41,248 INFO [RS:3;jenkins-hbase4:45471] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:41,248 INFO [RS:2;jenkins-hbase4:38507] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:41,248 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:41,248 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:41,248 INFO [RS:0;jenkins-hbase4:33411] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-22 18:11:41,249 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:41,249 INFO [RS:1;jenkins-hbase4:38977] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:41,248 INFO [RS:3;jenkins-hbase4:45471] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:41,249 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:41,248 DEBUG [RS:2;jenkins-hbase4:38507] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5d48fa3b to 127.0.0.1:62144 2023-07-22 18:11:41,249 INFO [RS:1;jenkins-hbase4:38977] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:41,249 INFO [RS:1;jenkins-hbase4:38977] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:41,249 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(3305): Received CLOSE for 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:41,249 DEBUG [RS:0;jenkins-hbase4:33411] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4053758f to 127.0.0.1:62144 2023-07-22 18:11:41,249 DEBUG [RS:0;jenkins-hbase4:33411] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,250 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33411,1690049473844; all regions closed. 2023-07-22 18:11:41,249 INFO [RS:3;jenkins-hbase4:45471] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:41,249 DEBUG [RS:2;jenkins-hbase4:38507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,252 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38507,1690049474291; all regions closed. 2023-07-22 18:11:41,252 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(3305): Received CLOSE for ca604f964db2e93cbe231535895107a6 2023-07-22 18:11:41,252 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(3305): Received CLOSE for fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:41,252 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(3305): Received CLOSE for 7f23a00cc0ec3efc597549946ee0206d 2023-07-22 18:11:41,252 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:41,252 DEBUG [RS:3;jenkins-hbase4:45471] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3baf0687 to 127.0.0.1:62144 2023-07-22 18:11:41,253 DEBUG [RS:3;jenkins-hbase4:45471] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing ca604f964db2e93cbe231535895107a6, disabling compactions & flushes 2023-07-22 18:11:41,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:41,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 
2023-07-22 18:11:41,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. after waiting 0 ms 2023-07-22 18:11:41,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:41,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing ca604f964db2e93cbe231535895107a6 1/1 column families, dataSize=27.09 KB heapSize=44.70 KB 2023-07-22 18:11:41,257 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,257 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,252 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,251 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5ad742ffe0eb92d1c27e3c0036eef8fa, disabling compactions & flushes 2023-07-22 18:11:41,251 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:41,258 DEBUG [RS:1;jenkins-hbase4:38977] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3b547dda to 127.0.0.1:62144 2023-07-22 18:11:41,258 DEBUG [RS:1;jenkins-hbase4:38977] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,258 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-22 18:11:41,258 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1478): Online Regions={5ad742ffe0eb92d1c27e3c0036eef8fa=testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa.} 2023-07-22 18:11:41,259 DEBUG [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1504): Waiting on 5ad742ffe0eb92d1c27e3c0036eef8fa 2023-07-22 18:11:41,258 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:41,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:41,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. after waiting 0 ms 2023-07-22 18:11:41,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:41,257 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,253 INFO [RS:3;jenkins-hbase4:45471] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:41,263 INFO [RS:3;jenkins-hbase4:45471] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:41,263 INFO [RS:3;jenkins-hbase4:45471] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-22 18:11:41,263 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-22 18:11:41,266 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-22 18:11:41,267 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1478): Online Regions={ca604f964db2e93cbe231535895107a6=hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6., 1588230740=hbase:meta,,1.1588230740, fe5e9f07ec9c7007b36085471b5cd477=hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477., 7f23a00cc0ec3efc597549946ee0206d=unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d.} 2023-07-22 18:11:41,267 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1504): Waiting on 1588230740, 7f23a00cc0ec3efc597549946ee0206d, ca604f964db2e93cbe231535895107a6, fe5e9f07ec9c7007b36085471b5cd477 2023-07-22 18:11:41,267 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:41,267 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 18:11:41,267 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:41,267 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:41,267 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:41,267 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.59 KB heapSize=120.50 KB 2023-07-22 18:11:41,298 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/testRename/5ad742ffe0eb92d1c27e3c0036eef8fa/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-22 18:11:41,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 2023-07-22 18:11:41,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5ad742ffe0eb92d1c27e3c0036eef8fa: 2023-07-22 18:11:41,302 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1690049494406.5ad742ffe0eb92d1c27e3c0036eef8fa. 
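From this point the log is the mini-cluster shutdown requested at 18:11:41,204: each region server closes its regions, flushes pending memstore data, and archives its WALs before the master, ZooKeeper and DFS stop. A rough sketch of how a test drives that lifecycle, assuming only the HBaseTestingUtility start/shutdown calls; the region-server count and the test body are illustrative placeholders, not the settings of this run.

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class MiniClusterLifecycleSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Start DFS, ZooKeeper, a master and some region servers (count is illustrative).
    util.startMiniCluster(3);
    try {
      // ... run admin / rsgroup operations against util.getConnection() or util.getAdmin() ...
    } finally {
      // Produces the shutdown sequence seen in this log: regions close and flush,
      // WAL files are moved to oldWALs, region servers stop, then the master, ZK and DFS.
      util.shutdownMiniCluster();
    }
  }
}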
2023-07-22 18:11:41,313 DEBUG [RS:2;jenkins-hbase4:38507] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs 2023-07-22 18:11:41,313 INFO [RS:2;jenkins-hbase4:38507] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38507%2C1690049474291:(num 1690049476565) 2023-07-22 18:11:41,313 DEBUG [RS:2;jenkins-hbase4:38507] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,313 INFO [RS:2;jenkins-hbase4:38507] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,319 INFO [RS:2;jenkins-hbase4:38507] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:41,320 INFO [RS:2;jenkins-hbase4:38507] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:41,320 INFO [RS:2;jenkins-hbase4:38507] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:41,320 INFO [RS:2;jenkins-hbase4:38507] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:41,320 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:41,322 INFO [RS:2;jenkins-hbase4:38507] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38507 2023-07-22 18:11:41,327 DEBUG [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs 2023-07-22 18:11:41,327 INFO [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33411%2C1690049473844.meta:.meta(num 1690049476903) 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, 
state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38507,1690049474291 2023-07-22 18:11:41,329 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,330 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38507,1690049474291] 2023-07-22 18:11:41,330 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38507,1690049474291; numProcessing=1 2023-07-22 18:11:41,331 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.09 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/.tmp/m/3aa904213e854f22bf473ea8331ffa2b 2023-07-22 18:11:41,332 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38507,1690049474291 already deleted, retry=false 2023-07-22 18:11:41,332 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38507,1690049474291 expired; onlineServers=3 2023-07-22 18:11:41,339 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.78 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/info/95d6294d28134918a761eaf05aa2c937 2023-07-22 18:11:41,346 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3aa904213e854f22bf473ea8331ffa2b 2023-07-22 18:11:41,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/.tmp/m/3aa904213e854f22bf473ea8331ffa2b as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m/3aa904213e854f22bf473ea8331ffa2b 2023-07-22 18:11:41,348 DEBUG [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs 2023-07-22 18:11:41,348 INFO [RS:0;jenkins-hbase4:33411] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33411%2C1690049473844:(num 1690049476565) 2023-07-22 18:11:41,348 DEBUG [RS:0;jenkins-hbase4:33411] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,348 INFO [RS:0;jenkins-hbase4:33411] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,348 
INFO [RS:0;jenkins-hbase4:33411] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:41,349 INFO [RS:0;jenkins-hbase4:33411] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:41,349 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:41,349 INFO [RS:0;jenkins-hbase4:33411] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:41,349 INFO [RS:0;jenkins-hbase4:33411] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:41,350 INFO [RS:0;jenkins-hbase4:33411] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33411 2023-07-22 18:11:41,351 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 95d6294d28134918a761eaf05aa2c937 2023-07-22 18:11:41,355 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:41,355 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:41,355 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33411,1690049473844 2023-07-22 18:11:41,355 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,357 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33411,1690049473844] 2023-07-22 18:11:41,357 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33411,1690049473844; numProcessing=2 2023-07-22 18:11:41,357 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 3aa904213e854f22bf473ea8331ffa2b 2023-07-22 18:11:41,358 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/m/3aa904213e854f22bf473ea8331ffa2b, entries=28, sequenceid=101, filesize=6.1 K 2023-07-22 18:11:41,359 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33411,1690049473844 already deleted, retry=false 2023-07-22 18:11:41,359 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33411,1690049473844 expired; onlineServers=2 2023-07-22 
18:11:41,359 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.09 KB/27740, heapSize ~44.68 KB/45752, currentSize=0 B/0 for ca604f964db2e93cbe231535895107a6 in 106ms, sequenceid=101, compaction requested=false 2023-07-22 18:11:41,371 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/rsgroup/ca604f964db2e93cbe231535895107a6/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-22 18:11:41,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:41,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:41,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for ca604f964db2e93cbe231535895107a6: 2023-07-22 18:11:41,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690049478087.ca604f964db2e93cbe231535895107a6. 2023-07-22 18:11:41,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fe5e9f07ec9c7007b36085471b5cd477, disabling compactions & flushes 2023-07-22 18:11:41,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:41,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:41,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. after waiting 0 ms 2023-07-22 18:11:41,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:41,387 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/rep_barrier/06dad3d34aaf4f6f92daf2bb0e9aad4e 2023-07-22 18:11:41,398 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06dad3d34aaf4f6f92daf2bb0e9aad4e 2023-07-22 18:11:41,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/namespace/fe5e9f07ec9c7007b36085471b5cd477/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-22 18:11:41,407 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 
2023-07-22 18:11:41,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fe5e9f07ec9c7007b36085471b5cd477: 2023-07-22 18:11:41,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690049477237.fe5e9f07ec9c7007b36085471b5cd477. 2023-07-22 18:11:41,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 7f23a00cc0ec3efc597549946ee0206d, disabling compactions & flushes 2023-07-22 18:11:41,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:41,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:41,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. after waiting 0 ms 2023-07-22 18:11:41,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:41,421 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/default/unmovedTable/7f23a00cc0ec3efc597549946ee0206d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-22 18:11:41,423 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 2023-07-22 18:11:41,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 7f23a00cc0ec3efc597549946ee0206d: 2023-07-22 18:11:41,423 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1690049496062.7f23a00cc0ec3efc597549946ee0206d. 
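For context: testRename and unmovedTable, whose regions close above, are tables the rsgroup tests move between region server groups. As a rough illustration of the kind of client call involved, the sketch below assumes the RSGroupAdminClient class from the hbase-rsgroup module (the same module this test belongs to); the group name "other_group" is hypothetical and not taken from this log.

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RSGroupMoveExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Create a hypothetical target group for this sketch.
          rsGroupAdmin.addRSGroup("other_group");
          // Move the test table into that group; a table like unmovedTable would stay
          // in its current group, which is what gives it its name in this test.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("testRename")), "other_group");
        }
      }
    }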
2023-07-22 18:11:41,426 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/table/0ade988015db4cce83bcd0dada9e83f6 2023-07-22 18:11:41,433 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ade988015db4cce83bcd0dada9e83f6 2023-07-22 18:11:41,434 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/info/95d6294d28134918a761eaf05aa2c937 as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info/95d6294d28134918a761eaf05aa2c937 2023-07-22 18:11:41,435 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-22 18:11:41,435 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-22 18:11:41,442 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 95d6294d28134918a761eaf05aa2c937 2023-07-22 18:11:41,442 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/info/95d6294d28134918a761eaf05aa2c937, entries=93, sequenceid=210, filesize=15.5 K 2023-07-22 18:11:41,443 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/rep_barrier/06dad3d34aaf4f6f92daf2bb0e9aad4e as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier/06dad3d34aaf4f6f92daf2bb0e9aad4e 2023-07-22 18:11:41,449 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 06dad3d34aaf4f6f92daf2bb0e9aad4e 2023-07-22 18:11:41,450 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/rep_barrier/06dad3d34aaf4f6f92daf2bb0e9aad4e, entries=18, sequenceid=210, filesize=6.9 K 2023-07-22 18:11:41,451 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/.tmp/table/0ade988015db4cce83bcd0dada9e83f6 as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table/0ade988015db4cce83bcd0dada9e83f6 2023-07-22 18:11:41,458 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:41,458 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:33411-0x1018e3ae4b00001, quorum=127.0.0.1:62144, 
baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:41,458 INFO [RS:0;jenkins-hbase4:33411] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33411,1690049473844; zookeeper connection closed. 2023-07-22 18:11:41,458 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@39a61b62] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@39a61b62 2023-07-22 18:11:41,459 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38977,1690049474061; all regions closed. 2023-07-22 18:11:41,467 DEBUG [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-22 18:11:41,476 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 0ade988015db4cce83bcd0dada9e83f6 2023-07-22 18:11:41,476 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/table/0ade988015db4cce83bcd0dada9e83f6, entries=27, sequenceid=210, filesize=7.2 K 2023-07-22 18:11:41,477 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.59 KB/78427, heapSize ~120.45 KB/123344, currentSize=0 B/0 for 1588230740 in 210ms, sequenceid=210, compaction requested=false 2023-07-22 18:11:41,481 DEBUG [RS:1;jenkins-hbase4:38977] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs 2023-07-22 18:11:41,481 INFO [RS:1;jenkins-hbase4:38977] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38977%2C1690049474061:(num 1690049476557) 2023-07-22 18:11:41,481 DEBUG [RS:1;jenkins-hbase4:38977] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,481 INFO [RS:1;jenkins-hbase4:38977] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,483 INFO [RS:1;jenkins-hbase4:38977] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:41,483 INFO [RS:1;jenkins-hbase4:38977] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:41,483 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:41,483 INFO [RS:1;jenkins-hbase4:38977] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:41,484 INFO [RS:1;jenkins-hbase4:38977] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
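For context: the NodeDeleted and NodeChildrenChanged events on /hbase/rs recorded throughout this shutdown are the region server ephemeral znodes disappearing as each server stops. Any ZooKeeper client can observe the same signal; the sketch below uses the plain Apache ZooKeeper API, with the test quorum address 127.0.0.1:62144 from this log used only as a placeholder.

    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class RsZNodeWatchExample {
      public static void main(String[] args) throws Exception {
        // 127.0.0.1:62144 is the quorum in this test log; substitute your own quorum.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:62144", 30000, new Watcher() {
          @Override
          public void process(WatchedEvent event) {
            // Region servers going away surface as NodeDeleted on their ephemeral
            // znode plus NodeChildrenChanged on /hbase/rs, as in the log entries above.
            System.out.println(event.getType() + " " + event.getPath());
          }
        });
        // Set a one-shot watch on the region server znodes.
        List<String> servers = zk.getChildren("/hbase/rs", true);
        System.out.println("online rs znodes: " + servers);
        Thread.sleep(60_000);  // keep the session alive long enough to observe events
        zk.close();
      }
    }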
2023-07-22 18:11:41,486 INFO [RS:1;jenkins-hbase4:38977] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38977 2023-07-22 18:11:41,489 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:41,489 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38977,1690049474061 2023-07-22 18:11:41,489 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,490 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38977,1690049474061] 2023-07-22 18:11:41,490 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38977,1690049474061; numProcessing=3 2023-07-22 18:11:41,491 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38977,1690049474061 already deleted, retry=false 2023-07-22 18:11:41,491 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38977,1690049474061 expired; onlineServers=1 2023-07-22 18:11:41,501 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=18 2023-07-22 18:11:41,502 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:41,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:41,503 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:41,503 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:41,546 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-22 18:11:41,546 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-22 18:11:41,667 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45471,1690049478954; all regions closed. 
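For context: the "Processing jenkins-hbase4...; numProcessing=..." and "onlineServers=..." entries above track dead versus live servers from the master's side. A client can query the same state through ClusterMetrics; the sketch below is illustrative only and assumes a reachable cluster.

    import java.util.EnumSet;
    import org.apache.hadoop.hbase.ClusterMetrics;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ClusterStateExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Fetch only the live- and dead-server portions of the cluster status,
          // mirroring the onlineServers / dead-server counts logged by the master above.
          ClusterMetrics metrics = admin.getClusterMetrics(
              EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS, ClusterMetrics.Option.DEAD_SERVERS));
          System.out.println("live servers: " + metrics.getLiveServerMetrics().keySet());
          System.out.println("dead servers: " + metrics.getDeadServerNames());
        }
      }
    }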
2023-07-22 18:11:41,674 DEBUG [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs 2023-07-22 18:11:41,674 INFO [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45471%2C1690049478954.meta:.meta(num 1690049480276) 2023-07-22 18:11:41,680 DEBUG [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/oldWALs 2023-07-22 18:11:41,681 INFO [RS:3;jenkins-hbase4:45471] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C45471%2C1690049478954:(num 1690049479579) 2023-07-22 18:11:41,681 DEBUG [RS:3;jenkins-hbase4:45471] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,681 INFO [RS:3;jenkins-hbase4:45471] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:41,681 INFO [RS:3;jenkins-hbase4:45471] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:41,681 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:41,682 INFO [RS:3;jenkins-hbase4:45471] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45471 2023-07-22 18:11:41,684 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,45471,1690049478954 2023-07-22 18:11:41,684 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:41,685 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,45471,1690049478954] 2023-07-22 18:11:41,685 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,45471,1690049478954; numProcessing=4 2023-07-22 18:11:41,686 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,45471,1690049478954 already deleted, retry=false 2023-07-22 18:11:41,686 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,45471,1690049478954 expired; onlineServers=0 2023-07-22 18:11:41,686 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,40289,1690049471773' ***** 2023-07-22 18:11:41,686 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-22 18:11:41,687 DEBUG [M:0;jenkins-hbase4:40289] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7b3e3549, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:41,687 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:41,689 INFO 
[M:0;jenkins-hbase4:40289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@66df7ad1{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 18:11:41,690 INFO [M:0;jenkins-hbase4:40289] server.AbstractConnector(383): Stopped ServerConnector@631e341c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:41,690 INFO [M:0;jenkins-hbase4:40289] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:41,690 INFO [M:0;jenkins-hbase4:40289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b463d55{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:41,691 INFO [M:0;jenkins-hbase4:40289] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@563e1db6{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:41,691 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:41,691 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:41,691 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,40289,1690049471773 2023-07-22 18:11:41,691 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,40289,1690049471773; all regions closed. 2023-07-22 18:11:41,691 DEBUG [M:0;jenkins-hbase4:40289] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:41,692 INFO [M:0;jenkins-hbase4:40289] master.HMaster(1491): Stopping master jetty server 2023-07-22 18:11:41,692 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:41,693 INFO [M:0;jenkins-hbase4:40289] server.AbstractConnector(383): Stopped ServerConnector@5e228df9{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:41,694 DEBUG [M:0;jenkins-hbase4:40289] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-22 18:11:41,694 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-22 18:11:41,694 DEBUG [M:0;jenkins-hbase4:40289] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-22 18:11:41,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049476008] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049476008,5,FailOnTimeoutGroup] 2023-07-22 18:11:41,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049476007] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049476007,5,FailOnTimeoutGroup] 2023-07-22 18:11:41,694 INFO [M:0;jenkins-hbase4:40289] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-22 18:11:41,694 INFO [M:0;jenkins-hbase4:40289] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-22 18:11:41,694 INFO [M:0;jenkins-hbase4:40289] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-22 18:11:41,694 DEBUG [M:0;jenkins-hbase4:40289] master.HMaster(1512): Stopping service threads 2023-07-22 18:11:41,694 INFO [M:0;jenkins-hbase4:40289] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-22 18:11:41,695 ERROR [M:0;jenkins-hbase4:40289] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-22 18:11:41,695 INFO [M:0;jenkins-hbase4:40289] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-22 18:11:41,696 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-22 18:11:41,696 DEBUG [M:0;jenkins-hbase4:40289] zookeeper.ZKUtil(398): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-22 18:11:41,696 WARN [M:0;jenkins-hbase4:40289] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-22 18:11:41,696 INFO [M:0;jenkins-hbase4:40289] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-22 18:11:41,696 INFO [M:0;jenkins-hbase4:40289] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-22 18:11:41,696 DEBUG [M:0;jenkins-hbase4:40289] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 18:11:41,696 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:41,696 DEBUG [M:0;jenkins-hbase4:40289] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 18:11:41,696 DEBUG [M:0;jenkins-hbase4:40289] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 18:11:41,696 DEBUG [M:0;jenkins-hbase4:40289] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:41,697 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.22 KB heapSize=621.37 KB 2023-07-22 18:11:41,720 INFO [M:0;jenkins-hbase4:40289] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.22 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/afb7aa0f6fea4fd18d5373b7f820bbc0 2023-07-22 18:11:41,728 DEBUG [M:0;jenkins-hbase4:40289] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/afb7aa0f6fea4fd18d5373b7f820bbc0 as hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/afb7aa0f6fea4fd18d5373b7f820bbc0 2023-07-22 18:11:41,734 INFO [M:0;jenkins-hbase4:40289] regionserver.HStore(1080): Added hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/afb7aa0f6fea4fd18d5373b7f820bbc0, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-22 18:11:41,735 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegion(2948): Finished flush of dataSize ~519.22 KB/531685, heapSize ~621.35 KB/636264, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 37ms, sequenceid=1152, compaction requested=false 2023-07-22 18:11:41,738 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:41,738 DEBUG [M:0;jenkins-hbase4:40289] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:41,743 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:41,743 INFO [M:0;jenkins-hbase4:40289] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-22 18:11:41,744 INFO [M:0;jenkins-hbase4:40289] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:40289 2023-07-22 18:11:41,745 DEBUG [M:0;jenkins-hbase4:40289] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,40289,1690049471773 already deleted, retry=false 2023-07-22 18:11:41,911 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:41,911 INFO [M:0;jenkins-hbase4:40289] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,40289,1690049471773; zookeeper connection closed. 
2023-07-22 18:11:41,911 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): master:40289-0x1018e3ae4b00000, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:41,958 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:41,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 18:11:41,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 18:11:42,011 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:42,011 INFO [RS:3;jenkins-hbase4:45471] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45471,1690049478954; zookeeper connection closed. 2023-07-22 18:11:42,011 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:45471-0x1018e3ae4b0000b, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:42,011 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@28dabc01] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@28dabc01 2023-07-22 18:11:42,111 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:42,111 INFO [RS:1;jenkins-hbase4:38977] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38977,1690049474061; zookeeper connection closed. 2023-07-22 18:11:42,111 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38977-0x1018e3ae4b00002, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:42,112 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@604597cb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@604597cb 2023-07-22 18:11:42,211 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:42,211 INFO [RS:2;jenkins-hbase4:38507] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38507,1690049474291; zookeeper connection closed. 
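For context: the remaining entries show the minicluster shutdown completing and a fresh minicluster being started with the same StartMiniClusterOption (1 master, 3 region servers, 3 data nodes, 1 ZooKeeper server) printed at the top of this run. The sketch below shows how a test typically drives that cycle with HBaseTestingUtility; it is a minimal illustration, not the actual TestRSGroupsAdmin1 setup code.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterRestartExample {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Same shape as the option printed in this log: 1 master, 3 region servers,
        // 3 data nodes, 1 ZooKeeper server.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .numZkServers(1)
            .build();
        util.startMiniCluster(option);
        try {
          // ... run assertions against util.getAdmin() / util.getConnection() here ...
        } finally {
          // Produces the "Shutdown of N master(s) and M regionserver(s) complete"
          // and "Minicluster is down" messages seen below.
          util.shutdownMiniCluster();
        }
      }
    }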
2023-07-22 18:11:42,211 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): regionserver:38507-0x1018e3ae4b00003, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:42,212 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4829a2eb] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4829a2eb 2023-07-22 18:11:42,212 INFO [Listener at localhost/37829] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-22 18:11:42,212 WARN [Listener at localhost/37829] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:42,217 INFO [Listener at localhost/37829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:42,321 WARN [BP-773543169-172.31.14.131-1690049468130 heartbeating to localhost/127.0.0.1:43335] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:42,321 WARN [BP-773543169-172.31.14.131-1690049468130 heartbeating to localhost/127.0.0.1:43335] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-773543169-172.31.14.131-1690049468130 (Datanode Uuid 6bb7a2b6-fd49-4764-9407-0ebf19b51997) service to localhost/127.0.0.1:43335 2023-07-22 18:11:42,323 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data5/current/BP-773543169-172.31.14.131-1690049468130] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:42,323 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data6/current/BP-773543169-172.31.14.131-1690049468130] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:42,325 WARN [Listener at localhost/37829] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:42,328 INFO [Listener at localhost/37829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:42,432 WARN [BP-773543169-172.31.14.131-1690049468130 heartbeating to localhost/127.0.0.1:43335] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:42,432 WARN [BP-773543169-172.31.14.131-1690049468130 heartbeating to localhost/127.0.0.1:43335] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-773543169-172.31.14.131-1690049468130 (Datanode Uuid 299bddcc-4f07-44f7-9457-659925c68d26) service to localhost/127.0.0.1:43335 2023-07-22 18:11:42,433 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data3/current/BP-773543169-172.31.14.131-1690049468130] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:42,433 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data4/current/BP-773543169-172.31.14.131-1690049468130] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:42,434 WARN [Listener at localhost/37829] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:42,437 INFO [Listener at localhost/37829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:42,540 WARN [BP-773543169-172.31.14.131-1690049468130 heartbeating to localhost/127.0.0.1:43335] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:42,540 WARN [BP-773543169-172.31.14.131-1690049468130 heartbeating to localhost/127.0.0.1:43335] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-773543169-172.31.14.131-1690049468130 (Datanode Uuid 84d6b383-b03f-4f0f-b53d-ec1b4939f810) service to localhost/127.0.0.1:43335 2023-07-22 18:11:42,540 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data1/current/BP-773543169-172.31.14.131-1690049468130] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:42,541 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/cluster_3816f725-a9d6-dfeb-f749-0fa8036d8a95/dfs/data/data2/current/BP-773543169-172.31.14.131-1690049468130] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:42,574 INFO [Listener at localhost/37829] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:42,702 INFO [Listener at localhost/37829] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-22 18:11:42,755 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-22 18:11:42,755 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-22 18:11:42,755 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.log.dir so I do NOT create it in target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78 2023-07-22 18:11:42,755 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/9353096c-f15c-281b-652a-de93c66d720e/hadoop.tmp.dir so I do NOT create it in target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020, deleteOnExit=true 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/test.cache.data in system properties and HBase conf 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.tmp.dir in system properties and HBase conf 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir in system properties and HBase conf 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-22 18:11:42,756 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-22 18:11:42,756 DEBUG [Listener at localhost/37829] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-22 18:11:42,757 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/nfs.dump.dir in system properties and HBase conf 2023-07-22 18:11:42,758 INFO [Listener at localhost/37829] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir in system properties and HBase conf 2023-07-22 18:11:42,758 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 18:11:42,758 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-22 18:11:42,758 INFO [Listener at localhost/37829] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-22 18:11:42,762 WARN [Listener at localhost/37829] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 18:11:42,762 WARN [Listener at localhost/37829] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 18:11:42,798 DEBUG [Listener at localhost/37829-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1018e3ae4b0000a, quorum=127.0.0.1:62144, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-22 18:11:42,799 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1018e3ae4b0000a, quorum=127.0.0.1:62144, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-22 18:11:42,806 WARN [Listener at localhost/37829] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:42,808 INFO [Listener at localhost/37829] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:42,812 INFO [Listener at localhost/37829] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/Jetty_localhost_35201_hdfs____.t8xah1/webapp 2023-07-22 18:11:42,905 INFO [Listener at localhost/37829] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35201 2023-07-22 18:11:42,910 WARN [Listener at localhost/37829] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 18:11:42,910 WARN [Listener at localhost/37829] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 18:11:42,958 WARN [Listener at localhost/43261] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:42,974 WARN [Listener at localhost/43261] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-22 
18:11:43,024 WARN [Listener at localhost/43261] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 18:11:43,026 WARN [Listener at localhost/43261] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:43,028 INFO [Listener at localhost/43261] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:43,032 INFO [Listener at localhost/43261] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/Jetty_localhost_46851_datanode____.qs5r7r/webapp 2023-07-22 18:11:43,127 INFO [Listener at localhost/43261] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46851 2023-07-22 18:11:43,134 WARN [Listener at localhost/36711] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:43,154 WARN [Listener at localhost/36711] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 18:11:43,156 WARN [Listener at localhost/36711] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:43,158 INFO [Listener at localhost/36711] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:43,162 INFO [Listener at localhost/36711] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/Jetty_localhost_33459_datanode____.kbhmzl/webapp 2023-07-22 18:11:43,253 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7acc1d7843eaff7f: Processing first storage report for DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e from datanode ea0320f5-eaa6-4105-9fc1-afdcf98a7b06 2023-07-22 18:11:43,254 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7acc1d7843eaff7f: from storage DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e node DatanodeRegistration(127.0.0.1:44783, datanodeUuid=ea0320f5-eaa6-4105-9fc1-afdcf98a7b06, infoPort=39899, infoSecurePort=0, ipcPort=36711, storageInfo=lv=-57;cid=testClusterID;nsid=2130223437;c=1690049502765), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2023-07-22 18:11:43,254 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x7acc1d7843eaff7f: Processing first storage report for DS-291ede73-d090-486e-a45f-4bfedb0b6591 from datanode ea0320f5-eaa6-4105-9fc1-afdcf98a7b06 2023-07-22 18:11:43,254 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x7acc1d7843eaff7f: from storage DS-291ede73-d090-486e-a45f-4bfedb0b6591 node DatanodeRegistration(127.0.0.1:44783, datanodeUuid=ea0320f5-eaa6-4105-9fc1-afdcf98a7b06, infoPort=39899, infoSecurePort=0, ipcPort=36711, storageInfo=lv=-57;cid=testClusterID;nsid=2130223437;c=1690049502765), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:43,293 INFO [Listener at localhost/36711] log.Slf4jLog(67): 
Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33459 2023-07-22 18:11:43,304 WARN [Listener at localhost/43289] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:43,323 WARN [Listener at localhost/43289] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 18:11:43,326 WARN [Listener at localhost/43289] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:43,327 INFO [Listener at localhost/43289] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:43,335 INFO [Listener at localhost/43289] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/Jetty_localhost_42961_datanode____nk8ifx/webapp 2023-07-22 18:11:43,428 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59d057494082ae93: Processing first storage report for DS-8188052e-9db2-4ed8-909a-137120570805 from datanode 855b84be-30cd-4b88-9095-222559fb9481 2023-07-22 18:11:43,428 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59d057494082ae93: from storage DS-8188052e-9db2-4ed8-909a-137120570805 node DatanodeRegistration(127.0.0.1:38381, datanodeUuid=855b84be-30cd-4b88-9095-222559fb9481, infoPort=41697, infoSecurePort=0, ipcPort=43289, storageInfo=lv=-57;cid=testClusterID;nsid=2130223437;c=1690049502765), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:43,428 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x59d057494082ae93: Processing first storage report for DS-afa6f315-3bf5-4c8a-a421-72b4205faabd from datanode 855b84be-30cd-4b88-9095-222559fb9481 2023-07-22 18:11:43,428 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x59d057494082ae93: from storage DS-afa6f315-3bf5-4c8a-a421-72b4205faabd node DatanodeRegistration(127.0.0.1:38381, datanodeUuid=855b84be-30cd-4b88-9095-222559fb9481, infoPort=41697, infoSecurePort=0, ipcPort=43289, storageInfo=lv=-57;cid=testClusterID;nsid=2130223437;c=1690049502765), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:43,444 INFO [Listener at localhost/43289] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:42961 2023-07-22 18:11:43,455 WARN [Listener at localhost/38083] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:43,572 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf1e61d32c5f67549: Processing first storage report for DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9 from datanode c1184ebc-8272-4308-b990-3d2cd1f0f589 2023-07-22 18:11:43,573 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf1e61d32c5f67549: from storage DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9 node DatanodeRegistration(127.0.0.1:46689, datanodeUuid=c1184ebc-8272-4308-b990-3d2cd1f0f589, infoPort=42341, infoSecurePort=0, ipcPort=38083, storageInfo=lv=-57;cid=testClusterID;nsid=2130223437;c=1690049502765), blocks: 0, hasStaleStorage: true, 
processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:43,573 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xf1e61d32c5f67549: Processing first storage report for DS-b8e2e449-9295-49ef-8a86-713a23c5679c from datanode c1184ebc-8272-4308-b990-3d2cd1f0f589 2023-07-22 18:11:43,573 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xf1e61d32c5f67549: from storage DS-b8e2e449-9295-49ef-8a86-713a23c5679c node DatanodeRegistration(127.0.0.1:46689, datanodeUuid=c1184ebc-8272-4308-b990-3d2cd1f0f589, infoPort=42341, infoSecurePort=0, ipcPort=38083, storageInfo=lv=-57;cid=testClusterID;nsid=2130223437;c=1690049502765), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:43,575 DEBUG [Listener at localhost/38083] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78 2023-07-22 18:11:43,583 INFO [Listener at localhost/38083] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/zookeeper_0, clientPort=56348, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-22 18:11:43,590 INFO [Listener at localhost/38083] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=56348 2023-07-22 18:11:43,591 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:43,592 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:43,615 INFO [Listener at localhost/38083] util.FSUtils(471): Created version file at hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe with version=8 2023-07-22 18:11:43,616 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/hbase-staging 2023-07-22 18:11:43,617 DEBUG [Listener at localhost/38083] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-22 18:11:43,617 DEBUG [Listener at localhost/38083] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-22 18:11:43,617 DEBUG [Listener at localhost/38083] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-22 18:11:43,617 DEBUG [Listener at localhost/38083] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
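
The entries above show the harness assembling its runtime piece by piece: a MiniZooKeeperCluster on a random client port, an hbase.rootdir version file written to the mini DFS, and a LocalHBaseCluster with every server port randomized. A minimal sketch of driving the same startup from test code, assuming the hbase-testing-util artifact is on the classpath; it mirrors the one master, three region servers and three data nodes visible in this log, and the body of the try block is only a placeholder:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)          // one active master, as in this run
            .numRegionServers(3)    // three region servers register further down the log
            .numDataNodes(3)        // three DataNodes send block reports above
            .build();
        util.startMiniCluster(option);   // brings up DFS, ZooKeeper, then the HBase cluster
        try {
          // placeholder: real tests exercise util.getAdmin() / util.getConnection() here
          System.out.println("tables: " + util.getAdmin().listTableNames().length);
        } finally {
          util.shutdownMiniCluster();    // stops the cluster and removes the test directories
        }
      }
    }

Keeping shutdownMiniCluster() in a finally block ensures the randomized test directories are cleaned up even when an assertion fails.
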
2023-07-22 18:11:43,618 INFO [Listener at localhost/38083] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:43,618 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:43,618 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:43,618 INFO [Listener at localhost/38083] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:43,618 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:43,618 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:43,619 INFO [Listener at localhost/38083] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:43,619 INFO [Listener at localhost/38083] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33193 2023-07-22 18:11:43,620 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:43,621 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:43,622 INFO [Listener at localhost/38083] zookeeper.RecoverableZooKeeper(93): Process identifier=master:33193 connecting to ZooKeeper ensemble=127.0.0.1:56348 2023-07-22 18:11:43,629 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:331930x0, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:43,630 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:33193-0x1018e3b64600000 connected 2023-07-22 18:11:43,653 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:43,653 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:43,653 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:43,658 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33193 2023-07-22 18:11:43,659 DEBUG [Listener at localhost/38083] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33193 2023-07-22 18:11:43,661 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33193 2023-07-22 18:11:43,662 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33193 2023-07-22 18:11:43,662 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33193 2023-07-22 18:11:43,664 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:43,665 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:43,665 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:43,665 INFO [Listener at localhost/38083] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-22 18:11:43,665 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:43,665 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:43,665 INFO [Listener at localhost/38083] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
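
The RpcExecutor and RWQueueRpcExecutor entries above describe the master's call-queue layout: separate default, priority (with a read/write split), replication and meta-priority executors, each with a small handler count and a short maximum queue length. Those shapes are driven by the standard IPC tuning keys; the exact keys this test overrides are not visible in the log, so the sketch below is illustrative only and uses key names from the stock configuration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcQueueTuningSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Illustrative values only; these keys shape the executor layout logged at instantiation.
        conf.setInt("hbase.regionserver.handler.count", 3);            // handlers for the default queue
        conf.setInt("hbase.regionserver.metahandler.count", 3);        // priority (meta) handlers
        conf.setFloat("hbase.ipc.server.callqueue.read.ratio", 0.5f);  // fraction of default handlers for reads
        conf.setFloat("hbase.ipc.server.callqueue.scan.ratio", 0.0f);  // no handlers reserved for long scans
        System.out.println(conf.getInt("hbase.regionserver.handler.count", -1));
      }
    }

A Configuration prepared this way would be handed to the testing utility (or to the servers) before startup; the executors then report the resulting queue and handler counts at instantiation time, as seen above.
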
2023-07-22 18:11:43,666 INFO [Listener at localhost/38083] http.HttpServer(1146): Jetty bound to port 37705 2023-07-22 18:11:43,666 INFO [Listener at localhost/38083] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:43,671 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:43,672 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6cc3484e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:43,672 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:43,672 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@46d38481{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:43,792 INFO [Listener at localhost/38083] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:43,793 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:43,794 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:43,794 INFO [Listener at localhost/38083] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 18:11:43,795 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:43,796 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4cf8ce7b{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/jetty-0_0_0_0-37705-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2894760796853212803/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 18:11:43,798 INFO [Listener at localhost/38083] server.AbstractConnector(333): Started ServerConnector@15f599c4{HTTP/1.1, (http/1.1)}{0.0.0.0:37705} 2023-07-22 18:11:43,798 INFO [Listener at localhost/38083] server.Server(415): Started @37750ms 2023-07-22 18:11:43,798 INFO [Listener at localhost/38083] master.HMaster(444): hbase.rootdir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe, hbase.cluster.distributed=false 2023-07-22 18:11:43,819 INFO [Listener at localhost/38083] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:43,819 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:43,819 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:43,819 
INFO [Listener at localhost/38083] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:43,820 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:43,820 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:43,820 INFO [Listener at localhost/38083] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:43,821 INFO [Listener at localhost/38083] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39145 2023-07-22 18:11:43,821 INFO [Listener at localhost/38083] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:43,822 DEBUG [Listener at localhost/38083] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:43,823 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:43,825 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:43,826 INFO [Listener at localhost/38083] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39145 connecting to ZooKeeper ensemble=127.0.0.1:56348 2023-07-22 18:11:43,831 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:391450x0, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:43,832 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39145-0x1018e3b64600001 connected 2023-07-22 18:11:43,832 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:43,833 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:43,833 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:43,834 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39145 2023-07-22 18:11:43,835 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39145 2023-07-22 18:11:43,840 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39145 2023-07-22 18:11:43,840 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39145 2023-07-22 18:11:43,840 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39145 2023-07-22 18:11:43,842 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:43,843 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:43,843 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:43,843 INFO [Listener at localhost/38083] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:43,843 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:43,843 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:43,844 INFO [Listener at localhost/38083] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 18:11:43,845 INFO [Listener at localhost/38083] http.HttpServer(1146): Jetty bound to port 42701 2023-07-22 18:11:43,845 INFO [Listener at localhost/38083] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:43,847 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:43,848 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@68afaa1c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:43,848 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:43,848 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@67ef1e6a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:43,984 INFO [Listener at localhost/38083] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:43,985 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:43,985 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:43,985 INFO [Listener at localhost/38083] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:43,986 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:43,987 INFO 
[Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5230b41e{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/jetty-0_0_0_0-42701-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5858070053964057358/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:43,988 INFO [Listener at localhost/38083] server.AbstractConnector(333): Started ServerConnector@6d4587e2{HTTP/1.1, (http/1.1)}{0.0.0.0:42701} 2023-07-22 18:11:43,988 INFO [Listener at localhost/38083] server.Server(415): Started @37941ms 2023-07-22 18:11:44,000 INFO [Listener at localhost/38083] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:44,001 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:44,001 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:44,001 INFO [Listener at localhost/38083] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:44,001 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:44,001 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:44,001 INFO [Listener at localhost/38083] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:44,002 INFO [Listener at localhost/38083] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39103 2023-07-22 18:11:44,002 INFO [Listener at localhost/38083] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:44,004 DEBUG [Listener at localhost/38083] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:44,005 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:44,007 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:44,008 INFO [Listener at localhost/38083] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39103 connecting to ZooKeeper ensemble=127.0.0.1:56348 2023-07-22 18:11:44,012 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:391030x0, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 
18:11:44,014 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39103-0x1018e3b64600002 connected 2023-07-22 18:11:44,014 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:44,015 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:44,015 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:44,016 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39103 2023-07-22 18:11:44,016 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39103 2023-07-22 18:11:44,016 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39103 2023-07-22 18:11:44,017 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39103 2023-07-22 18:11:44,017 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39103 2023-07-22 18:11:44,019 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:44,019 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:44,019 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:44,020 INFO [Listener at localhost/38083] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:44,020 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:44,020 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:44,020 INFO [Listener at localhost/38083] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
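
Each server above connects to the ensemble through RecoverableZooKeeper and then, via ZKUtil, sets watchers on znodes such as /hbase/master and /hbase/running before those znodes exist; the NodeCreated events delivered later in the log are those watches firing once the master registers itself. The same pattern can be reproduced with the plain ZooKeeper client, where an exists() call registers a watch whether or not the node is present yet. A minimal sketch, with an illustrative quorum address (the mini cluster publishes its own randomized client port):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class MasterZNodeWatchSketch {
      public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        CountDownLatch created = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> {
          if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
            connected.countDown();        // matches the "SyncConnected" events in the log
          }
          if (event.getType() == Watcher.Event.EventType.NodeCreated
              && "/hbase/master".equals(event.getPath())) {
            created.countDown();          // matches "NodeCreated ... path=/hbase/master"
          }
        });
        connected.await();
        // Registers a watch even though the znode may not exist yet: the plain-client
        // equivalent of the ZKUtil "Set watcher on znode that does not yet exist" message.
        zk.exists("/hbase/master", true);
        created.await();                  // fires once a master creates /hbase/master
        zk.close();
      }
    }
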
2023-07-22 18:11:44,021 INFO [Listener at localhost/38083] http.HttpServer(1146): Jetty bound to port 38919 2023-07-22 18:11:44,021 INFO [Listener at localhost/38083] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:44,023 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:44,023 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@575948b0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:44,024 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:44,024 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7622fabc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:44,143 INFO [Listener at localhost/38083] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:44,144 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:44,145 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:44,145 INFO [Listener at localhost/38083] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:44,146 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:44,146 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7aa65829{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/jetty-0_0_0_0-38919-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3371798925393573248/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:44,148 INFO [Listener at localhost/38083] server.AbstractConnector(333): Started ServerConnector@4fbf3e9c{HTTP/1.1, (http/1.1)}{0.0.0.0:38919} 2023-07-22 18:11:44,148 INFO [Listener at localhost/38083] server.Server(415): Started @38100ms 2023-07-22 18:11:44,160 INFO [Listener at localhost/38083] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:44,160 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:44,160 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:44,160 INFO [Listener at localhost/38083] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:44,160 INFO 
[Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:44,160 INFO [Listener at localhost/38083] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:44,160 INFO [Listener at localhost/38083] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:44,161 INFO [Listener at localhost/38083] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37903 2023-07-22 18:11:44,162 INFO [Listener at localhost/38083] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:44,163 DEBUG [Listener at localhost/38083] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:44,163 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:44,164 INFO [Listener at localhost/38083] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:44,165 INFO [Listener at localhost/38083] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37903 connecting to ZooKeeper ensemble=127.0.0.1:56348 2023-07-22 18:11:44,168 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:379030x0, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:44,169 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:379030x0, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:44,170 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37903-0x1018e3b64600003 connected 2023-07-22 18:11:44,170 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:44,171 DEBUG [Listener at localhost/38083] zookeeper.ZKUtil(164): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:44,171 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37903 2023-07-22 18:11:44,171 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37903 2023-07-22 18:11:44,173 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37903 2023-07-22 18:11:44,174 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37903 2023-07-22 18:11:44,174 DEBUG [Listener at localhost/38083] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=37903 2023-07-22 18:11:44,176 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:44,176 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:44,176 INFO [Listener at localhost/38083] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:44,176 INFO [Listener at localhost/38083] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:44,177 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:44,177 INFO [Listener at localhost/38083] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:44,177 INFO [Listener at localhost/38083] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 18:11:44,177 INFO [Listener at localhost/38083] http.HttpServer(1146): Jetty bound to port 40415 2023-07-22 18:11:44,177 INFO [Listener at localhost/38083] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:44,179 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:44,180 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@521b5309{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:44,180 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:44,180 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2768d20d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:44,296 INFO [Listener at localhost/38083] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:44,297 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:44,297 INFO [Listener at localhost/38083] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:44,298 INFO [Listener at localhost/38083] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 18:11:44,299 INFO [Listener at localhost/38083] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:44,300 INFO [Listener at localhost/38083] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5c04004{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/java.io.tmpdir/jetty-0_0_0_0-40415-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6581579263346131228/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:44,301 INFO [Listener at localhost/38083] server.AbstractConnector(333): Started ServerConnector@53586b{HTTP/1.1, (http/1.1)}{0.0.0.0:40415} 2023-07-22 18:11:44,302 INFO [Listener at localhost/38083] server.Server(415): Started @38254ms 2023-07-22 18:11:44,304 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:44,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3878e383{HTTP/1.1, (http/1.1)}{0.0.0.0:41479} 2023-07-22 18:11:44,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @38260ms 2023-07-22 18:11:44,308 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,309 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 18:11:44,310 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,312 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:44,312 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:44,312 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:44,312 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:44,313 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,313 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:44,315 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,33193,1690049503617 from backup master directory 2023-07-22 18:11:44,315 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:44,316 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,316 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:44,316 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 18:11:44,316 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,342 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/hbase.id with ID: 5b80bce5-e55d-4296-84e7-1eff82617e94 2023-07-22 18:11:44,355 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:44,358 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,371 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x570c6203 to 127.0.0.1:56348 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:44,375 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@186eae47, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:44,375 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:44,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-22 18:11:44,376 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:44,377 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store-tmp 2023-07-22 18:11:44,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:44,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 18:11:44,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:44,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:44,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 18:11:44,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:44,389 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
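
The master bootstraps its local 'master:store' region from the table descriptor printed above, whose single 'proc' family carries an explicit attribute set (ROW bloom filter, one version, no compression or encoding, 64 KB blocks, and so on). For reference, the same attribute set expressed through the public client API looks roughly like the sketch below; the table name is hypothetical, since master:store is internal and is never created through Admin:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DescriptorSketch {
      public static TableDescriptor build() {
        // Same attributes as the 'proc' family printed in the log, on a hypothetical table.
        ColumnFamilyDescriptor proc =
            ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("proc"))
                .setBloomFilterType(BloomType.ROW)
                .setInMemory(false)
                .setMaxVersions(1)
                .setMinVersions(0)
                .setKeepDeletedCells(KeepDeletedCells.FALSE)
                .setDataBlockEncoding(DataBlockEncoding.NONE)
                .setCompressionType(Compression.Algorithm.NONE)
                .setTimeToLive(HConstants.FOREVER)   // TTL => 'FOREVER'
                .setBlocksize(65536)
                .setBlockCacheEnabled(true)
                .setScope(0)                         // REPLICATION_SCOPE => '0'
                .build();
        return TableDescriptorBuilder.newBuilder(TableName.valueOf("example_store"))
            .setColumnFamily(proc)
            .build();
      }

      public static void main(String[] args) {
        System.out.println(build());
      }
    }

Printing the built descriptor should yield an attribute listing in essentially the same {NAME => ...} form that appears in the log lines above.
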
2023-07-22 18:11:44,389 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:44,390 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/WALs/jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,392 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33193%2C1690049503617, suffix=, logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/WALs/jenkins-hbase4.apache.org,33193,1690049503617, archiveDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/oldWALs, maxLogs=10 2023-07-22 18:11:44,408 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK] 2023-07-22 18:11:44,409 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK] 2023-07-22 18:11:44,409 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK] 2023-07-22 18:11:44,412 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/WALs/jenkins-hbase4.apache.org,33193,1690049503617/jenkins-hbase4.apache.org%2C33193%2C1690049503617.1690049504393 2023-07-22 18:11:44,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK], DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK], DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK]] 2023-07-22 18:11:44,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:44,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:44,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:44,412 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:44,414 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:44,416 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-22 18:11:44,416 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-22 18:11:44,417 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,417 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:44,418 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:44,421 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:44,423 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:44,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11863985760, jitterRate=0.10491977632045746}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:44,424 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:44,424 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-22 18:11:44,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-22 18:11:44,425 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-22 18:11:44,425 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-22 18:11:44,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-22 18:11:44,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-22 18:11:44,426 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-22 18:11:44,431 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-22 18:11:44,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-22 18:11:44,432 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-22 18:11:44,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-22 18:11:44,433 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-22 18:11:44,436 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,436 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-22 18:11:44,437 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-22 18:11:44,440 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-22 18:11:44,441 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:44,441 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:44,441 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-22 18:11:44,441 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:44,441 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,33193,1690049503617, sessionid=0x1018e3b64600000, setting cluster-up flag (Was=false) 2023-07-22 18:11:44,446 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-22 18:11:44,451 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,454 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,457 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-22 18:11:44,458 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:44,459 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.hbase-snapshot/.tmp 2023-07-22 18:11:44,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-22 18:11:44,464 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-22 18:11:44,465 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-22 18:11:44,465 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:44,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
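The two system coprocessor entries above are registered at priorities 536870911 and 536870912 (the MasterQuotasObserver logged just below continues at 536870913). A minimal arithmetic sketch that reproduces those numbers, assuming the base system priority is Integer.MAX_VALUE / 4 and each additional system coprocessor simply takes the next integer:

    // Sketch: reproduce the coprocessor priorities seen in the log.
    // Assumption: base system priority = Integer.MAX_VALUE / 4, and each
    // further system coprocessor gets the next integer value.
    public class CoprocessorPriorityExample {
        public static void main(String[] args) {
            int prioritySystem = Integer.MAX_VALUE / 4;  // 536870911
            String[] loaded = {
                "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint",
                "org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver",
                "org.apache.hadoop.hbase.quotas.MasterQuotasObserver"
            };
            for (int i = 0; i < loaded.length; i++) {
                System.out.println(loaded[i] + " -> priority=" + (prioritySystem + i));
            }
        }
    }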
2023-07-22 18:11:44,466 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-22 18:11:44,467 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:44,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 18:11:44,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 18:11:44,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 18:11:44,482 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
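The StochasticLoadBalancer entries above list the loaded cost functions and report the sum of their multipliers. A rough, JDK-only sketch of folding per-function costs into one weighted figure; the multipliers and cost values are made up, and normalizing by the multiplier sum is an assumption rather than something taken from the log:

    // Illustrative only: combine hypothetical per-function costs with
    // hypothetical multipliers, guarding against a multiplier sum of 0.0
    // like the one reported above.
    public class WeightedCostExample {
        public static void main(String[] args) {
            double[] multipliers = {500, 500, 7, 25};          // hypothetical
            double[] costs       = {0.10, 0.05, 0.30, 0.02};   // hypothetical, each in [0,1]
            double weighted = 0, sumMultiplier = 0;
            for (int i = 0; i < multipliers.length; i++) {
                weighted += multipliers[i] * costs[i];
                sumMultiplier += multipliers[i];
            }
            double normalized = sumMultiplier > 0 ? weighted / sumMultiplier : Double.NaN;
            System.out.println("weighted=" + weighted + " normalized=" + normalized);
        }
    }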
2023-07-22 18:11:44,482 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:44,483 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690049534485 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-22 18:11:44,485 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
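Each "Starting executor service ... corePoolSize=N, maxPoolSize=N" line above corresponds to a named, fixed-size thread pool. A minimal JDK sketch of a pool with that shape; the thread-name prefix is borrowed from the MASTER_OPEN_REGION line and the submitted task is purely illustrative:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative only: a fixed-size, named pool shaped like the executor
    // services logged above (corePoolSize == maxPoolSize == 5).
    public class NamedFixedPoolExample {
        public static void main(String[] args) throws InterruptedException {
            AtomicInteger counter = new AtomicInteger();
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 5, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
                r -> new Thread(r, "MASTER_OPEN_REGION-" + counter.incrementAndGet()));
            pool.allowCoreThreadTimeOut(true);  // let idle core threads exit
            pool.submit(() -> System.out.println(Thread.currentThread().getName() + " running"));
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.SECONDS);
        }
    }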
2023-07-22 18:11:44,485 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:44,485 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-22 18:11:44,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-22 18:11:44,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-22 18:11:44,486 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-22 18:11:44,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-22 18:11:44,487 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-22 18:11:44,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049504487,5,FailOnTimeoutGroup] 2023-07-22 18:11:44,487 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:44,495 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049504487,5,FailOnTimeoutGroup] 2023-07-22 18:11:44,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-22 18:11:44,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,495 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
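The ScheduledChore entries above (LogsCleaner every 600000 ms, ReplicationBarrierCleaner every 43200000 ms, SnapshotCleaner every 1800000 ms) are periodic background tasks. A JDK-only sketch of running a task at one of those fixed periods; the task body is illustrative:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative periodic task, shaped like the chores logged above
    // (e.g. LogsCleaner: period=600000 ms).
    public class ChoreLikeScheduleExample {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
            long periodMs = 600000L;  // 10 minutes, as for LogsCleaner above
            ses.scheduleAtFixedRate(
                () -> System.out.println("cleaner chore tick"),
                0L, periodMs, TimeUnit.MILLISECONDS);
            TimeUnit.SECONDS.sleep(1);  // let the first tick run, then stop
            ses.shutdownNow();
        }
    }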
2023-07-22 18:11:44,504 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(951): ClusterId : 5b80bce5-e55d-4296-84e7-1eff82617e94 2023-07-22 18:11:44,504 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(951): ClusterId : 5b80bce5-e55d-4296-84e7-1eff82617e94 2023-07-22 18:11:44,504 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(951): ClusterId : 5b80bce5-e55d-4296-84e7-1eff82617e94 2023-07-22 18:11:44,508 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:44,508 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:44,508 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:44,512 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:44,512 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:44,512 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:44,512 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:44,513 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:44,513 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:44,513 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:44,513 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:44,513 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe 2023-07-22 18:11:44,515 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(45): Procedure 
online-snapshot initialized 2023-07-22 18:11:44,515 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:44,516 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:44,518 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ReadOnlyZKClient(139): Connect 0x6061f652 to 127.0.0.1:56348 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:44,518 DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ReadOnlyZKClient(139): Connect 0x54482ba2 to 127.0.0.1:56348 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:44,518 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ReadOnlyZKClient(139): Connect 0x57fcee06 to 127.0.0.1:56348 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:44,538 DEBUG [RS:2;jenkins-hbase4:37903] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ea6042e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:44,539 DEBUG [RS:0;jenkins-hbase4:39145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5a8fcf5b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:44,539 DEBUG [RS:2;jenkins-hbase4:37903] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2f9b0e15, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:44,539 DEBUG [RS:0;jenkins-hbase4:39145] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1bb2c560, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:44,539 DEBUG [RS:1;jenkins-hbase4:39103] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5d048576, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:44,540 DEBUG [RS:1;jenkins-hbase4:39103] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@180600b2, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:44,547 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:44,551 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, 
prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:44,552 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/info 2023-07-22 18:11:44,553 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:44,553 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:39103 2023-07-22 18:11:44,553 INFO [RS:1;jenkins-hbase4:39103] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:44,553 INFO [RS:1;jenkins-hbase4:39103] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:44,553 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:44,553 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39145 2023-07-22 18:11:44,553 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:37903 2023-07-22 18:11:44,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,554 INFO [RS:0;jenkins-hbase4:39145] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:44,554 INFO [RS:0;jenkins-hbase4:39145] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:44,554 INFO [RS:2;jenkins-hbase4:37903] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:44,554 INFO [RS:2;jenkins-hbase4:37903] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:44,554 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33193,1690049503617 with isa=jenkins-hbase4.apache.org/172.31.14.131:39103, startcode=1690049504000 2023-07-22 18:11:44,554 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:44,554 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:44,554 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-22 18:11:44,554 DEBUG [RS:1;jenkins-hbase4:39103] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:44,554 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33193,1690049503617 with isa=jenkins-hbase4.apache.org/172.31.14.131:37903, startcode=1690049504159 2023-07-22 18:11:44,554 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,33193,1690049503617 with isa=jenkins-hbase4.apache.org/172.31.14.131:39145, startcode=1690049503818 2023-07-22 18:11:44,555 DEBUG [RS:2;jenkins-hbase4:37903] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:44,555 DEBUG [RS:0;jenkins-hbase4:39145] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:44,556 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:44,556 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:44,556 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:41371, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:44,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,557 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:44,558 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/table 2023-07-22 18:11:44,558 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:44,560 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33193] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,560 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:44,561 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-22 18:11:44,561 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58553, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:44,561 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35477, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:44,561 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe 2023-07-22 18:11:44,561 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33193] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,561 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43261 2023-07-22 18:11:44,561 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:44,561 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37705 2023-07-22 18:11:44,562 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-22 18:11:44,562 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33193] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,562 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
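The CompactionConfiguration entries above use ratio 1.200000 with minFilesToCompact=3 and maxFilesToCompact=10. A small sketch of the ratio test that exploring-style selection typically applies, assuming the usual rule that every file in a candidate set must be no larger than ratio times the combined size of the other files in the set; the file sizes are hypothetical:

    // Assumption: a candidate selection passes when each file is
    // <= ratio * (sum of the other files). Sizes below are made up.
    public class CompactionRatioExample {
        static boolean filesInRatio(long[] sizes, double ratio) {
            long total = 0;
            for (long s : sizes) total += s;
            for (long s : sizes) {
                if (s > (total - s) * ratio) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            double ratio = 1.2;  // from the CompactionConfiguration lines above
            long[] balanced = {10_000_000L, 12_000_000L, 11_000_000L};
            long[] skewed   = {100_000_000L, 1_000_000L, 2_000_000L};
            System.out.println("balanced passes: " + filesInRatio(balanced, ratio));  // true
            System.out.println("skewed passes:   " + filesInRatio(skewed, ratio));    // false
        }
    }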
2023-07-22 18:11:44,562 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe 2023-07-22 18:11:44,562 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-22 18:11:44,562 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43261 2023-07-22 18:11:44,562 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37705 2023-07-22 18:11:44,562 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe 2023-07-22 18:11:44,562 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:43261 2023-07-22 18:11:44,562 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=37705 2023-07-22 18:11:44,563 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:44,565 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,565 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740 2023-07-22 18:11:44,566 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740 2023-07-22 18:11:44,568 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 18:11:44,569 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,569 WARN [RS:1;jenkins-hbase4:39103] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:44,569 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39103,1690049504000] 2023-07-22 18:11:44,569 INFO [RS:1;jenkins-hbase4:39103] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:44,569 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37903,1690049504159] 2023-07-22 18:11:44,569 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39145,1690049503818] 2023-07-22 18:11:44,569 DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ZKUtil(162): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,569 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ZKUtil(162): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,569 WARN [RS:0;jenkins-hbase4:39145] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:44,569 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,570 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:44,570 INFO [RS:0;jenkins-hbase4:39145] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:44,569 WARN [RS:2;jenkins-hbase4:37903] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:44,570 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,570 INFO [RS:2;jenkins-hbase4:37903] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:44,570 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1948): logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,580 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,580 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:44,580 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,581 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10549462560, jitterRate=-0.0175047367811203}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:44,581 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:44,581 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:44,581 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 18:11:44,581 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ZKUtil(162): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,581 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:44,581 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:44,581 DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ZKUtil(162): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,581 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:44,582 DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ZKUtil(162): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,582 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:44,582 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:44,582 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:44,582 
DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ZKUtil(162): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,582 INFO [RS:1;jenkins-hbase4:39103] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:44,582 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ZKUtil(162): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,583 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ZKUtil(162): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,583 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:44,583 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-22 18:11:44,583 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ZKUtil(162): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,583 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-22 18:11:44,584 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:44,584 INFO [RS:0;jenkins-hbase4:39145] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:44,584 DEBUG [RS:2;jenkins-hbase4:37903] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:44,585 INFO [RS:2;jenkins-hbase4:37903] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:44,585 INFO [RS:1;jenkins-hbase4:39103] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:44,586 INFO [RS:0;jenkins-hbase4:39145] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:44,587 INFO [RS:1;jenkins-hbase4:39103] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:44,587 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
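The "Opened 1588230740" entry above reports desiredMaxFileSize=10549462560 for jitterRate=-0.0175047367811203, and the FlushLargeStoresPolicy fallback divides the memstore flush size by the number of families to get flushSizeLowerBound=44739242 (the "42.7 M" in the debug line). A short arithmetic sketch that reproduces both figures, assuming the default 10 GiB hbase.hregion.max.filesize, the default 128 MiB hbase.hregion.memstore.flush.size, and the three column families (info, rep_barrier, table) in the descriptor:

    // Reproduce the two derived numbers in the region-open entries above.
    public class RegionOpenNumbersExample {
        public static void main(String[] args) {
            long maxFileSize = 10L * 1024 * 1024 * 1024;   // 10737418240 (assumed default)
            double jitterRate = -0.0175047367811203;       // printed in the log entry
            long desiredMaxFileSize = maxFileSize + (long) (maxFileSize * jitterRate);
            // ~10549462560; the last digit can move by one because the logged
            // jitterRate is a rounded decimal rendering of the original double.
            System.out.println("desiredMaxFileSize=" + desiredMaxFileSize);

            long memstoreFlushSize = 128L * 1024 * 1024;   // 134217728 (assumed default)
            int families = 3;                              // info, rep_barrier, table
            long flushSizeLowerBound = memstoreFlushSize / families;
            System.out.println("flushSizeLowerBound=" + flushSizeLowerBound);  // 44739242
            System.out.printf("lower bound ~= %.1f MB%n",
                flushSizeLowerBound / 1024.0 / 1024.0);                        // ~42.7 MB
        }
    }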
2023-07-22 18:11:44,587 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-22 18:11:44,588 INFO [RS:0;jenkins-hbase4:39145] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:44,594 INFO [RS:2;jenkins-hbase4:37903] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:44,594 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,588 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:44,595 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-22 18:11:44,595 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:44,595 INFO [RS:2;jenkins-hbase4:37903] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:44,596 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,597 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:44,597 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,597 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
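The MemStoreFlusher entries above report globalMemStoreLimit=782.4 M and globalMemStoreLimitLowMark=743.3 M. A quick check that these are consistent with the usual defaults, assuming the limit is 40% of the heap and the low-water mark is 95% of the limit; the heap size is inferred from the logged limit, so it is only an estimate:

    // Back out the heap size from the logged global memstore limit and verify
    // the low-water mark. Assumed defaults: limit = 0.4 * heap, low mark = 0.95 * limit.
    public class MemStoreLimitExample {
        public static void main(String[] args) {
            double limitMb = 782.4;             // from the log
            double heapMb = limitMb / 0.4;      // ~1956 MB of heap (inferred)
            double lowMarkMb = limitMb * 0.95;  // 743.28 -> logged as 743.3 M
            System.out.printf("heap ~= %.0f MB, lowMark ~= %.1f MB%n", heapMb, lowMarkMb);
        }
    }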
2023-07-22 18:11:44,598 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,598 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,598 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,598 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
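The PressureAwareCompactionThroughputController entries above configure a 50.00-100.00 MB/second band with a 60000 ms tuning period. A sketch of the kind of linear interpolation such a controller can apply between those bounds as compaction pressure rises from 0 to 1; the interpolation rule is an assumption here, not something stated in the log:

    // Assumed tuning rule: throughput = lower + (higher - lower) * pressure,
    // with pressure clamped to [0, 1].
    public class CompactionThroughputSketch {
        static double throughputMbPerSec(double pressure, double lower, double higher) {
            double p = Math.max(0.0, Math.min(1.0, pressure));
            return lower + (higher - lower) * p;
        }

        public static void main(String[] args) {
            double lower = 50.0, higher = 100.0;  // bounds from the log
            for (double pressure : new double[] {0.0, 0.5, 1.0}) {
                System.out.printf("pressure=%.1f -> %.1f MB/s%n",
                    pressure, throughputMbPerSec(pressure, lower, higher));
            }
        }
    }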
2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:0;jenkins-hbase4:39145] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,599 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,600 DEBUG [RS:1;jenkins-hbase4:39103] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,600 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,600 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:44,600 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,600 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,600 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service 
name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,600 DEBUG [RS:2;jenkins-hbase4:37903] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:44,601 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,601 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,602 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,603 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,617 INFO [RS:2;jenkins-hbase4:37903] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:44,617 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37903,1690049504159-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,618 INFO [RS:1;jenkins-hbase4:39103] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:44,618 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39103,1690049504000-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,619 INFO [RS:0;jenkins-hbase4:39145] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:44,619 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,39145,1690049503818-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:44,627 INFO [RS:2;jenkins-hbase4:37903] regionserver.Replication(203): jenkins-hbase4.apache.org,37903,1690049504159 started 2023-07-22 18:11:44,627 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37903,1690049504159, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37903, sessionid=0x1018e3b64600003 2023-07-22 18:11:44,627 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:44,627 DEBUG [RS:2;jenkins-hbase4:37903] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37903,1690049504159' 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37903,1690049504159' 2023-07-22 18:11:44,628 DEBUG [RS:2;jenkins-hbase4:37903] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:44,629 DEBUG [RS:2;jenkins-hbase4:37903] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:44,629 DEBUG [RS:2;jenkins-hbase4:37903] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:44,629 INFO [RS:2;jenkins-hbase4:37903] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-22 18:11:44,630 INFO [RS:1;jenkins-hbase4:39103] regionserver.Replication(203): jenkins-hbase4.apache.org,39103,1690049504000 started 2023-07-22 18:11:44,630 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39103,1690049504000, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39103, sessionid=0x1018e3b64600002 2023-07-22 18:11:44,630 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:44,630 DEBUG [RS:1;jenkins-hbase4:39103] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,630 DEBUG [RS:1;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39103,1690049504000' 2023-07-22 18:11:44,630 DEBUG [RS:1;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:44,630 DEBUG 
[RS:1;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39103,1690049504000' 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:44,631 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,631 DEBUG [RS:1;jenkins-hbase4:39103] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:44,631 INFO [RS:1;jenkins-hbase4:39103] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-22 18:11:44,632 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,632 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ZKUtil(398): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-22 18:11:44,632 INFO [RS:2;jenkins-hbase4:37903] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-22 18:11:44,632 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ZKUtil(398): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-22 18:11:44,632 INFO [RS:1;jenkins-hbase4:39103] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-22 18:11:44,632 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,632 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,632 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,632 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:44,636 INFO [RS:0;jenkins-hbase4:39145] regionserver.Replication(203): jenkins-hbase4.apache.org,39145,1690049503818 started 2023-07-22 18:11:44,636 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39145,1690049503818, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39145, sessionid=0x1018e3b64600001 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39145,1690049503818' 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:44,637 DEBUG [RS:0;jenkins-hbase4:39145] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:44,638 DEBUG [RS:0;jenkins-hbase4:39145] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39145,1690049503818' 2023-07-22 18:11:44,638 DEBUG [RS:0;jenkins-hbase4:39145] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:44,638 DEBUG [RS:0;jenkins-hbase4:39145] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:44,638 DEBUG [RS:0;jenkins-hbase4:39145] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:44,638 INFO [RS:0;jenkins-hbase4:39145] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-22 18:11:44,638 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,639 DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ZKUtil(398): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-22 18:11:44,639 INFO [RS:0;jenkins-hbase4:39145] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-22 18:11:44,639 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,639 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 
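(Note: several DEBUG lines above are the region servers probing ZooKeeper paths such as /hbase/rpc-throttle, /hbase/flush-table-proc/acquired, and /hbase/online-snapshot/acquired, where a missing znode is expected — "not an error". Below is a small sketch of the same kind of existence check using the plain ZooKeeper client rather than HBase's internal ZKUtil; the quorum address 127.0.0.1:56348 is copied from the log and would differ on another cluster.)

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeProbeSketch {
  public static void main(String[] args) throws Exception {
    // Quorum address copied from the log above; replace with your own quorum.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:56348", 30_000, event -> { /* ignore watch events */ });
    try {
      String[] paths = {
          "/hbase/rpc-throttle",
          "/hbase/flush-table-proc/acquired",
          "/hbase/online-snapshot/acquired"
      };
      for (String path : paths) {
        Stat stat = zk.exists(path, false);   // null simply means the znode is absent
        System.out.println(path + " -> " + (stat == null ? "absent" : "present"));
      }
    } finally {
      zk.close();
    }
  }
}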
2023-07-22 18:11:44,736 INFO [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39103%2C1690049504000, suffix=, logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39103,1690049504000, archiveDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs, maxLogs=32 2023-07-22 18:11:44,736 INFO [RS:2;jenkins-hbase4:37903] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37903%2C1690049504159, suffix=, logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,37903,1690049504159, archiveDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs, maxLogs=32 2023-07-22 18:11:44,740 INFO [RS:0;jenkins-hbase4:39145] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39145%2C1690049503818, suffix=, logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39145,1690049503818, archiveDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs, maxLogs=32 2023-07-22 18:11:44,746 DEBUG [jenkins-hbase4:33193] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-22 18:11:44,746 DEBUG [jenkins-hbase4:33193] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:44,747 DEBUG [jenkins-hbase4:33193] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:44,747 DEBUG [jenkins-hbase4:33193] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:44,747 DEBUG [jenkins-hbase4:33193] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:44,747 DEBUG [jenkins-hbase4:33193] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:44,748 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39103,1690049504000, state=OPENING 2023-07-22 18:11:44,749 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-22 18:11:44,750 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:44,751 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:44,753 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39103,1690049504000}] 2023-07-22 18:11:44,763 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK] 2023-07-22 18:11:44,763 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK] 2023-07-22 18:11:44,763 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK] 2023-07-22 18:11:44,768 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK] 2023-07-22 18:11:44,769 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK] 2023-07-22 18:11:44,769 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK] 2023-07-22 18:11:44,769 INFO [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39103,1690049504000/jenkins-hbase4.apache.org%2C39103%2C1690049504000.1690049504741 2023-07-22 18:11:44,770 DEBUG [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK], DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK], DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK]] 2023-07-22 18:11:44,777 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK] 2023-07-22 18:11:44,777 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK] 2023-07-22 18:11:44,777 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK] 2023-07-22 18:11:44,779 WARN [ReadOnlyZKClient-127.0.0.1:56348@0x570c6203] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-22 18:11:44,779 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:44,781 INFO [RS:0;jenkins-hbase4:39145] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39145,1690049503818/jenkins-hbase4.apache.org%2C39145%2C1690049503818.1690049504743 2023-07-22 18:11:44,782 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42452, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:44,782 DEBUG [RS:0;jenkins-hbase4:39145] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK], DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK], DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK]] 2023-07-22 18:11:44,783 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=39103] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:42452 deadline: 1690049564782, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,787 INFO [RS:2;jenkins-hbase4:37903] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,37903,1690049504159/jenkins-hbase4.apache.org%2C37903%2C1690049504159.1690049504743 2023-07-22 18:11:44,787 DEBUG [RS:2;jenkins-hbase4:37903] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK], DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK], DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK]] 2023-07-22 18:11:44,907 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:44,909 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:44,910 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42462, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:44,915 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 18:11:44,915 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:44,916 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39103%2C1690049504000.meta, suffix=.meta, logDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39103,1690049504000, archiveDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs, maxLogs=32 2023-07-22 18:11:44,934 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK] 2023-07-22 18:11:44,935 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake 
in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK] 2023-07-22 18:11:44,935 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK] 2023-07-22 18:11:44,937 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/WALs/jenkins-hbase4.apache.org,39103,1690049504000/jenkins-hbase4.apache.org%2C39103%2C1690049504000.meta.1690049504917.meta 2023-07-22 18:11:44,937 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44783,DS-833ee8fe-7fc5-41f5-bc4a-1fd65941ee7e,DISK], DatanodeInfoWithStorage[127.0.0.1:46689,DS-fbe6318b-a30c-4774-ad83-89e98d0c7af9,DISK], DatanodeInfoWithStorage[127.0.0.1:38381,DS-8188052e-9db2-4ed8-909a-137120570805,DISK]] 2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 18:11:44,938 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 18:11:44,938 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 18:11:44,941 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:44,942 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/info 2023-07-22 18:11:44,942 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/info 2023-07-22 18:11:44,943 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:44,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,944 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:44,944 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:44,944 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:44,945 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:44,945 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,946 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:44,946 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/table 2023-07-22 18:11:44,946 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/table 2023-07-22 18:11:44,947 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:44,947 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:44,948 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740 2023-07-22 18:11:44,949 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740 2023-07-22 18:11:44,951 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
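(Note: the CompactionConfiguration lines above — minFilesToCompact:3, maxFilesToCompact:10, ratio 1.2, off-peak ratio 5.0, throttle point 2684354560 — are driven by ordinary configuration keys. The sketch below reads the most relevant hbase.hstore.compaction.* settings with defaults matching the values logged here; the exact mapping of keys to the logged fields is an assumption based on the standard settings, not something stated in this log.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    int minFiles  = conf.getInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact:3
    int maxFiles  = conf.getInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact:10
    float ratio   = conf.getFloat("hbase.hstore.compaction.ratio", 1.2F);          // ratio 1.200000
    float offPeak = conf.getFloat("hbase.hstore.compaction.ratio.offpeak", 5.0F);  // off-peak ratio 5.000000
    long minSize  = conf.getLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize:128 MB
    long throttle = conf.getLong("hbase.regionserver.thread.compaction.throttle",
        2L * maxFiles * 128 * 1024 * 1024);                                        // throttle point 2684354560

    System.out.printf("minFiles=%d maxFiles=%d ratio=%.2f offPeak=%.2f minSize=%d throttle=%d%n",
        minFiles, maxFiles, ratio, offPeak, minSize, throttle);
  }
}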
2023-07-22 18:11:44,952 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:44,953 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10722077280, jitterRate=-0.0014287382364273071}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:44,953 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:44,954 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690049504907 2023-07-22 18:11:44,958 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 18:11:44,958 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 18:11:44,959 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,39103,1690049504000, state=OPEN 2023-07-22 18:11:44,960 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 18:11:44,960 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:44,961 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-22 18:11:44,961 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,39103,1690049504000 in 209 msec 2023-07-22 18:11:44,963 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-22 18:11:44,963 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 378 msec 2023-07-22 18:11:44,964 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 497 msec 2023-07-22 18:11:44,964 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690049504964, completionTime=-1 2023-07-22 18:11:44,964 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-22 18:11:44,964 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-22 18:11:44,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-22 18:11:44,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690049564970 2023-07-22 18:11:44,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690049624970 2023-07-22 18:11:44,970 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 5 msec 2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33193,1690049503617-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33193,1690049503617-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33193,1690049503617-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:33193, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-22 18:11:44,976 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:44,977 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-22 18:11:44,978 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-22 18:11:44,978 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:44,982 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:44,983 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:44,984 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1 empty. 2023-07-22 18:11:44,984 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:44,984 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-22 18:11:45,000 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:45,001 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => e87f08c7dab61bcb04e87b6fa049aaa1, NAME => 'hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp 2023-07-22 18:11:45,013 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,014 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing e87f08c7dab61bcb04e87b6fa049aaa1, disabling compactions & flushes 2023-07-22 18:11:45,014 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 
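(Note: the HMaster entry above creates 'hbase:namespace' from the table descriptor shown in shell-style syntax. For illustration only, a comparable descriptor can be built with the public HBase 2.x client API as sketched below; the table name, the Connection setup, and driving it from client code rather than the master's internal CreateTableProcedure are assumptions. The 'hbase:rsgroup' descriptor created a little further down additionally carries a coprocessor and a split policy, which would correspond to TableDescriptorBuilder.setCoprocessor(...) and setRegionSplitPolicyClassName(...) on the same builder.)

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableDescriptor td = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("demo_namespace_like"))        // hypothetical table name
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
              .setBloomFilterType(BloomType.ROW)                       // BLOOMFILTER => 'ROW'
              .setInMemory(true)                                       // IN_MEMORY => 'true'
              .setMaxVersions(10)                                      // VERSIONS => '10'
              .setBlocksize(8192)                                      // BLOCKSIZE => '8192'
              .build())
          .build();
      admin.createTable(td);
    }
  }
}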
2023-07-22 18:11:45,014 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:45,014 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. after waiting 0 ms 2023-07-22 18:11:45,014 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:45,014 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:45,014 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for e87f08c7dab61bcb04e87b6fa049aaa1: 2023-07-22 18:11:45,016 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:45,017 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049505017"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049505017"}]},"ts":"1690049505017"} 2023-07-22 18:11:45,020 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:45,021 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:45,021 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505021"}]},"ts":"1690049505021"} 2023-07-22 18:11:45,022 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-22 18:11:45,025 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:45,025 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:45,025 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:45,025 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:45,025 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:45,025 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e87f08c7dab61bcb04e87b6fa049aaa1, ASSIGN}] 2023-07-22 18:11:45,028 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=e87f08c7dab61bcb04e87b6fa049aaa1, ASSIGN 2023-07-22 18:11:45,028 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=e87f08c7dab61bcb04e87b6fa049aaa1, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39103,1690049504000; forceNewPlan=false, retain=false 2023-07-22 18:11:45,085 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:45,087 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-22 18:11:45,089 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:45,090 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:45,091 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,092 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa empty. 
2023-07-22 18:11:45,092 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,092 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-22 18:11:45,103 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:45,104 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 38e1416df82f316bbcffb47b6cb1b5aa, NAME => 'hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp 2023-07-22 18:11:45,113 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,114 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 38e1416df82f316bbcffb47b6cb1b5aa, disabling compactions & flushes 2023-07-22 18:11:45,114 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:45,114 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:45,114 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. after waiting 0 ms 2023-07-22 18:11:45,114 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:45,114 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 
2023-07-22 18:11:45,114 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 38e1416df82f316bbcffb47b6cb1b5aa: 2023-07-22 18:11:45,116 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:45,117 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049505117"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049505117"}]},"ts":"1690049505117"} 2023-07-22 18:11:45,118 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:45,119 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:45,119 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505119"}]},"ts":"1690049505119"} 2023-07-22 18:11:45,120 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-22 18:11:45,124 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:45,124 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:45,124 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:45,124 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:45,124 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:45,124 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=38e1416df82f316bbcffb47b6cb1b5aa, ASSIGN}] 2023-07-22 18:11:45,128 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=38e1416df82f316bbcffb47b6cb1b5aa, ASSIGN 2023-07-22 18:11:45,129 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=38e1416df82f316bbcffb47b6cb1b5aa, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39145,1690049503818; forceNewPlan=false, retain=false 2023-07-22 18:11:45,129 INFO [jenkins-hbase4:33193] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-22 18:11:45,131 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e87f08c7dab61bcb04e87b6fa049aaa1, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:45,131 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049505131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049505131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049505131"}]},"ts":"1690049505131"} 2023-07-22 18:11:45,131 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=38e1416df82f316bbcffb47b6cb1b5aa, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:45,131 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049505131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049505131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049505131"}]},"ts":"1690049505131"} 2023-07-22 18:11:45,132 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure e87f08c7dab61bcb04e87b6fa049aaa1, server=jenkins-hbase4.apache.org,39103,1690049504000}] 2023-07-22 18:11:45,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 38e1416df82f316bbcffb47b6cb1b5aa, server=jenkins-hbase4.apache.org,39145,1690049503818}] 2023-07-22 18:11:45,286 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:45,286 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:45,288 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35126, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:45,288 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 
2023-07-22 18:11:45,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => e87f08c7dab61bcb04e87b6fa049aaa1, NAME => 'hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:45,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,289 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,291 INFO [StoreOpener-e87f08c7dab61bcb04e87b6fa049aaa1-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,292 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:45,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 38e1416df82f316bbcffb47b6cb1b5aa, NAME => 'hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:45,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:45,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. service=MultiRowMutationService 2023-07-22 18:11:45,292 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-22 18:11:45,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,292 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,293 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,293 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,293 DEBUG [StoreOpener-e87f08c7dab61bcb04e87b6fa049aaa1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/info 2023-07-22 18:11:45,293 DEBUG [StoreOpener-e87f08c7dab61bcb04e87b6fa049aaa1-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/info 2023-07-22 18:11:45,293 INFO [StoreOpener-e87f08c7dab61bcb04e87b6fa049aaa1-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region e87f08c7dab61bcb04e87b6fa049aaa1 columnFamilyName info 2023-07-22 18:11:45,294 INFO [StoreOpener-e87f08c7dab61bcb04e87b6fa049aaa1-1] regionserver.HStore(310): Store=e87f08c7dab61bcb04e87b6fa049aaa1/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:45,294 INFO [StoreOpener-38e1416df82f316bbcffb47b6cb1b5aa-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,295 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,296 DEBUG [StoreOpener-38e1416df82f316bbcffb47b6cb1b5aa-1] 
util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/m 2023-07-22 18:11:45,296 DEBUG [StoreOpener-38e1416df82f316bbcffb47b6cb1b5aa-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/m 2023-07-22 18:11:45,296 INFO [StoreOpener-38e1416df82f316bbcffb47b6cb1b5aa-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 38e1416df82f316bbcffb47b6cb1b5aa columnFamilyName m 2023-07-22 18:11:45,297 INFO [StoreOpener-38e1416df82f316bbcffb47b6cb1b5aa-1] regionserver.HStore(310): Store=38e1416df82f316bbcffb47b6cb1b5aa/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:45,297 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,298 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:45,300 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:45,301 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened e87f08c7dab61bcb04e87b6fa049aaa1; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9443458240, jitterRate=-0.12050941586494446}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:45,301 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:45,301 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for e87f08c7dab61bcb04e87b6fa049aaa1: 2023-07-22 18:11:45,302 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for 
hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1., pid=8, masterSystemTime=1690049505284 2023-07-22 18:11:45,305 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:45,305 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:45,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:45,306 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=e87f08c7dab61bcb04e87b6fa049aaa1, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:45,306 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049505306"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049505306"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049505306"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049505306"}]},"ts":"1690049505306"} 2023-07-22 18:11:45,306 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 38e1416df82f316bbcffb47b6cb1b5aa; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@20a5c97, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:45,306 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 38e1416df82f316bbcffb47b6cb1b5aa: 2023-07-22 18:11:45,307 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa., pid=9, masterSystemTime=1690049505286 2023-07-22 18:11:45,310 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:45,311 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 
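The entries above show the hbase:namespace and hbase:rsgroup regions being opened and their post-open deploy tasks completing on the region servers. For reference, the resulting assignment can be checked from an ordinary client once the tables are online; the sketch below is illustrative only (the class name ListSystemRegions is made up, and a reachable cluster configuration on the classpath is assumed), not part of the test run:

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class ListSystemRegions {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
            try (Connection conn = ConnectionFactory.createConnection(conf);
                 Admin admin = conn.getAdmin()) {
                // hbase:namespace and hbase:rsgroup are the system tables opened in the log above
                for (TableName tn : new TableName[] {
                        TableName.valueOf("hbase:namespace"), TableName.valueOf("hbase:rsgroup") }) {
                    List<RegionInfo> regions = admin.getRegions(tn);
                    System.out.println(tn + " has " + regions.size() + " region(s)");
                }
            }
        }
    }

Both tables were created with a single region (empty start and end keys), so each should report one RegionInfo here.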
2023-07-22 18:11:45,311 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=38e1416df82f316bbcffb47b6cb1b5aa, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:45,311 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049505311"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049505311"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049505311"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049505311"}]},"ts":"1690049505311"} 2023-07-22 18:11:45,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-22 18:11:45,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure e87f08c7dab61bcb04e87b6fa049aaa1, server=jenkins-hbase4.apache.org,39103,1690049504000 in 177 msec 2023-07-22 18:11:45,314 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-22 18:11:45,314 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=e87f08c7dab61bcb04e87b6fa049aaa1, ASSIGN in 287 msec 2023-07-22 18:11:45,314 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-22 18:11:45,314 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 38e1416df82f316bbcffb47b6cb1b5aa, server=jenkins-hbase4.apache.org,39145,1690049503818 in 180 msec 2023-07-22 18:11:45,315 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:45,315 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505315"}]},"ts":"1690049505315"} 2023-07-22 18:11:45,316 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-22 18:11:45,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-22 18:11:45,317 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=38e1416df82f316bbcffb47b6cb1b5aa, ASSIGN in 190 msec 2023-07-22 18:11:45,317 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:45,317 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505317"}]},"ts":"1690049505317"} 2023-07-22 18:11:45,319 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:45,319 INFO [PEWorker-1] 
hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-22 18:11:45,320 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 343 msec 2023-07-22 18:11:45,321 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:45,322 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 236 msec 2023-07-22 18:11:45,378 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-22 18:11:45,383 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:45,383 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:45,388 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-22 18:11:45,390 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:45,392 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35128, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:45,396 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-22 18:11:45,396 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
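The RSGroupStartupWorker lines above report that the hbase:rsgroup table is online and the GroupBasedLoadBalancer has come up, and a later RPC in this log lists the groups. A hedged sketch of issuing that same listing from a client, assuming the RSGroupAdminClient helper shipped with the branch-2.4 hbase-rsgroup module (constructor and method names as in that module; the class name ListRSGroups is made up):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class ListRSGroups {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
                // RSGroupAdminClient talks to the RSGroupAdminEndpoint coprocessor on the master
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
                    System.out.println(group.getName() + " servers=" + group.getServers()
                        + " tables=" + group.getTables());
                }
            }
        }
    }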
2023-07-22 18:11:45,400 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:45,401 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:45,401 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:45,403 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 13 msec 2023-07-22 18:11:45,404 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:45,406 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,33193,1690049503617] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-22 18:11:45,410 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-22 18:11:45,416 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:45,418 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 8 msec 2023-07-22 18:11:45,425 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-22 18:11:45,426 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-22 18:11:45,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.111sec 2023-07-22 18:11:45,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
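At this point the master has created the built-in default and hbase namespaces and reports initialization complete. Listing namespaces from a client is a one-call check; a minimal sketch (the class name ListNamespaces is illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class ListNamespaces {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // After master initialization the two built-in namespaces (default, hbase) exist
                for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
                    System.out.println(ns.getName() + " " + ns.getConfiguration());
                }
            }
        }
    }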
2023-07-22 18:11:45,427 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:45,428 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-22 18:11:45,428 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-22 18:11:45,430 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:45,430 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:45,432 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,432 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-22 18:11:45,432 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3 empty. 2023-07-22 18:11:45,433 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,433 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-22 18:11:45,437 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-22 18:11:45,438 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-22 18:11:45,440 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:45,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:45,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
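The hbase:quota table above is created by the master itself with two column families, q and u, each keeping a single version. A client would build an equivalent two-family descriptor roughly as follows; this is only an illustration of the descriptor API (the table name quota_demo is made up), not how the system table is actually created:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TwoFamilyTable {
        public static void main(String[] args) throws Exception {
            // Two families with VERSIONS => 1, mirroring the q/u layout logged above
            TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("quota_demo"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("q")).setMaxVersions(1).build())
                .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("u")).setMaxVersions(1).build())
                .build();
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                admin.createTable(desc);
            }
        }
    }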
2023-07-22 18:11:45,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-22 18:11:45,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33193,1690049503617-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-22 18:11:45,441 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33193,1690049503617-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-22 18:11:45,503 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-22 18:11:45,504 DEBUG [Listener at localhost/38083] zookeeper.ReadOnlyZKClient(139): Connect 0x2eaccd28 to 127.0.0.1:56348 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:45,509 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:45,511 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1cb5a49f5a3118e10cdef510ff05b8d3, NAME => 'hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp 2023-07-22 18:11:45,512 DEBUG [Listener at localhost/38083] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@649dfbc5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:45,515 DEBUG [hconnection-0x169a71c-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:45,518 INFO [RS-EventLoopGroup-10-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42466, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:45,520 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:45,520 INFO [Listener at localhost/38083] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:45,532 DEBUG [Listener at localhost/38083] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-22 18:11:45,534 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58538, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-22 18:11:45,537 DEBUG [Listener at localhost/38083-EventThread] 
zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-22 18:11:45,537 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:45,539 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-22 18:11:45,540 DEBUG [Listener at localhost/38083] zookeeper.ReadOnlyZKClient(139): Connect 0x456bd6f7 to 127.0.0.1:56348 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:45,549 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,549 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing 1cb5a49f5a3118e10cdef510ff05b8d3, disabling compactions & flushes 2023-07-22 18:11:45,549 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:45,549 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:45,549 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. after waiting 0 ms 2023-07-22 18:11:45,549 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:45,549 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 
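The "set balanceSwitch=false" master RPC above is the test client turning the load balancer off before it starts manipulating assignments. From application code the same call looks like this (the class name DisableBalancer is made up):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class DisableBalancer {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Corresponds to the "set balanceSwitch=false" request logged by MasterRpcServices
                boolean previous = admin.balancerSwitch(false, true); // turn off, wait for in-flight plans
                System.out.println("balancer was previously " + (previous ? "on" : "off"));
            }
        }
    }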
2023-07-22 18:11:45,549 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for 1cb5a49f5a3118e10cdef510ff05b8d3: 2023-07-22 18:11:45,552 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:45,552 DEBUG [Listener at localhost/38083] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@976c801, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:45,553 INFO [Listener at localhost/38083] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:56348 2023-07-22 18:11:45,553 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690049505553"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049505553"}]},"ts":"1690049505553"} 2023-07-22 18:11:45,554 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:45,556 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:45,556 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505556"}]},"ts":"1690049505556"} 2023-07-22 18:11:45,557 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-22 18:11:45,563 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:45,563 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:45,563 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:45,563 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:45,563 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:45,563 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=1cb5a49f5a3118e10cdef510ff05b8d3, ASSIGN}] 2023-07-22 18:11:45,565 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=1cb5a49f5a3118e10cdef510ff05b8d3, ASSIGN 2023-07-22 18:11:45,574 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=1cb5a49f5a3118e10cdef510ff05b8d3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39145,1690049503818; forceNewPlan=false, retain=false 2023-07-22 18:11:45,577 DEBUG [Listener at localhost/38083-EventThread] 
zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:45,578 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-22 18:11:45,596 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018e3b6460000a connected 2023-07-22 18:11:45,596 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-22 18:11:45,604 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-22 18:11:45,613 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:45,616 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 22 msec 2023-07-22 18:11:45,705 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-22 18:11:45,711 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:45,712 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-22 18:11:45,714 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:45,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 15 2023-07-22 18:11:45,715 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 18:11:45,716 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:45,717 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:45,718 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:45,720 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:45,721 DEBUG [HFileArchiver-3] 
backup.HFileArchiver(153): Directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 empty. 2023-07-22 18:11:45,721 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:45,721 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-22 18:11:45,724 INFO [jenkins-hbase4:33193] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:45,725 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1cb5a49f5a3118e10cdef510ff05b8d3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:45,726 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690049505725"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049505725"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049505725"}]},"ts":"1690049505725"} 2023-07-22 18:11:45,727 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=13, state=RUNNABLE; OpenRegionProcedure 1cb5a49f5a3118e10cdef510ff05b8d3, server=jenkins-hbase4.apache.org,39145,1690049503818}] 2023-07-22 18:11:45,735 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:45,737 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9ca5dd9d1796210375c759b21f9312f7, NAME => 'np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp 2023-07-22 18:11:45,752 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,752 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 9ca5dd9d1796210375c759b21f9312f7, disabling compactions & flushes 2023-07-22 18:11:45,752 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:45,752 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:45,752 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 
after waiting 0 ms 2023-07-22 18:11:45,752 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:45,752 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:45,752 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 9ca5dd9d1796210375c759b21f9312f7: 2023-07-22 18:11:45,759 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:45,762 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049505762"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049505762"}]},"ts":"1690049505762"} 2023-07-22 18:11:45,764 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:45,765 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:45,765 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505765"}]},"ts":"1690049505765"} 2023-07-22 18:11:45,767 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-22 18:11:45,771 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:45,771 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:45,771 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:45,771 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:45,771 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:45,771 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, ASSIGN}] 2023-07-22 18:11:45,772 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, ASSIGN 2023-07-22 18:11:45,773 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=15, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39103,1690049504000; forceNewPlan=false, retain=false 2023-07-22 18:11:45,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 18:11:45,883 INFO 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:45,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1cb5a49f5a3118e10cdef510ff05b8d3, NAME => 'hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:45,883 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:45,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,884 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,885 INFO [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,886 DEBUG [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3/q 2023-07-22 18:11:45,887 DEBUG [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3/q 2023-07-22 18:11:45,887 INFO [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1cb5a49f5a3118e10cdef510ff05b8d3 columnFamilyName q 2023-07-22 18:11:45,887 INFO [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] regionserver.HStore(310): Store=1cb5a49f5a3118e10cdef510ff05b8d3/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:45,887 INFO [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, 
cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,889 DEBUG [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3/u 2023-07-22 18:11:45,889 DEBUG [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3/u 2023-07-22 18:11:45,889 INFO [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1cb5a49f5a3118e10cdef510ff05b8d3 columnFamilyName u 2023-07-22 18:11:45,890 INFO [StoreOpener-1cb5a49f5a3118e10cdef510ff05b8d3-1] regionserver.HStore(310): Store=1cb5a49f5a3118e10cdef510ff05b8d3/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:45,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,891 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,893 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
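The FlushLargeStoresPolicy line above notes that hbase.hregion.percolumnfamilyflush.size.lower.bound is not set in the hbase:quota descriptor, so the policy falls back to the memstore flush size divided by the number of families (64 MB here). If an explicit per-family lower bound is wanted, it can be carried on a table descriptor; a hedged sketch (the table name demo and the 16 MB value are arbitrary):

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class PerFamilyFlushBound {
        public static void main(String[] args) {
            // Hypothetical table "demo"; sets a 16 MB lower bound instead of the derived 64 MB default
            TableDescriptor desc = TableDescriptorBuilder
                .newBuilder(TableName.valueOf("demo"))
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
                .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                          String.valueOf(16 * 1024 * 1024))
                .build();
            System.out.println(desc);
        }
    }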
2023-07-22 18:11:45,895 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:45,897 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:45,897 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1cb5a49f5a3118e10cdef510ff05b8d3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11506944160, jitterRate=0.07166768610477448}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-22 18:11:45,897 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1cb5a49f5a3118e10cdef510ff05b8d3: 2023-07-22 18:11:45,898 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3., pid=16, masterSystemTime=1690049505879 2023-07-22 18:11:45,899 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:45,899 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:45,900 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=1cb5a49f5a3118e10cdef510ff05b8d3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:45,900 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1690049505900"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049505900"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049505900"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049505900"}]},"ts":"1690049505900"} 2023-07-22 18:11:45,903 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=13 2023-07-22 18:11:45,903 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=13, state=SUCCESS; OpenRegionProcedure 1cb5a49f5a3118e10cdef510ff05b8d3, server=jenkins-hbase4.apache.org,39145,1690049503818 in 174 msec 2023-07-22 18:11:45,904 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-22 18:11:45,904 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=1cb5a49f5a3118e10cdef510ff05b8d3, ASSIGN in 340 msec 2023-07-22 18:11:45,905 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:45,905 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049505905"}]},"ts":"1690049505905"} 2023-07-22 18:11:45,906 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-22 18:11:45,908 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:45,909 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 481 msec 2023-07-22 18:11:45,923 INFO [jenkins-hbase4:33193] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:45,924 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=9ca5dd9d1796210375c759b21f9312f7, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:45,924 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049505924"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049505924"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049505924"}]},"ts":"1690049505924"} 2023-07-22 18:11:45,926 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 9ca5dd9d1796210375c759b21f9312f7, server=jenkins-hbase4.apache.org,39103,1690049504000}] 2023-07-22 18:11:46,017 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 18:11:46,081 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 
2023-07-22 18:11:46,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9ca5dd9d1796210375c759b21f9312f7, NAME => 'np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:46,081 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:46,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,082 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,083 INFO [StoreOpener-9ca5dd9d1796210375c759b21f9312f7-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,084 DEBUG [StoreOpener-9ca5dd9d1796210375c759b21f9312f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/fam1 2023-07-22 18:11:46,084 DEBUG [StoreOpener-9ca5dd9d1796210375c759b21f9312f7-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/fam1 2023-07-22 18:11:46,085 INFO [StoreOpener-9ca5dd9d1796210375c759b21f9312f7-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9ca5dd9d1796210375c759b21f9312f7 columnFamilyName fam1 2023-07-22 18:11:46,085 INFO [StoreOpener-9ca5dd9d1796210375c759b21f9312f7-1] regionserver.HStore(310): Store=9ca5dd9d1796210375c759b21f9312f7/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:46,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,086 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,089 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,091 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:46,092 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9ca5dd9d1796210375c759b21f9312f7; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9499759680, jitterRate=-0.11526593565940857}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:46,092 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9ca5dd9d1796210375c759b21f9312f7: 2023-07-22 18:11:46,092 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7., pid=18, masterSystemTime=1690049506077 2023-07-22 18:11:46,094 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:46,094 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:46,094 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=9ca5dd9d1796210375c759b21f9312f7, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:46,094 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049506094"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049506094"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049506094"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049506094"}]},"ts":"1690049506094"} 2023-07-22 18:11:46,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-22 18:11:46,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 9ca5dd9d1796210375c759b21f9312f7, server=jenkins-hbase4.apache.org,39103,1690049504000 in 169 msec 2023-07-22 18:11:46,098 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=15 2023-07-22 18:11:46,098 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=15, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, ASSIGN in 326 msec 2023-07-22 18:11:46,099 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:46,099 DEBUG [PEWorker-5] 
hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049506099"}]},"ts":"1690049506099"} 2023-07-22 18:11:46,100 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-22 18:11:46,102 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:46,103 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=15, state=SUCCESS; CreateTableProcedure table=np1:table1 in 391 msec 2023-07-22 18:11:46,318 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=15 2023-07-22 18:11:46,318 INFO [Listener at localhost/38083] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 15 completed 2023-07-22 18:11:46,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:46,321 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-22 18:11:46,322 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:46,323 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-22 18:11:46,323 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 18:11:46,341 INFO [PEWorker-3] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=20 msec 2023-07-22 18:11:46,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 18:11:46,429 INFO [Listener at localhost/38083] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 
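The rollback above shows the namespace quota check rejecting np1:table2 because the requested regions would push the namespace past its five-region limit, and the client sees the create fail with the same message. A sketch of provoking and handling that failure (the class name, the split keys, and the assumption that five split keys were used to request six regions are illustrative; the server side rolls the procedure back with a QuotaExceededException, which surfaces to the client as an IOException):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceRegionQuota {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                // Five split keys -> six regions for np1:table2; with np1:table1 already holding
                // one region this exceeds the five-region quota on the np1 namespace
                byte[][] splits = { Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
                                    Bytes.toBytes("4"), Bytes.toBytes("5") };
                try {
                    admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("np1:table2"))
                        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
                        .build(), splits);
                } catch (IOException e) {
                    System.out.println("create rejected as expected: " + e.getMessage());
                }
            }
        }
    }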
2023-07-22 18:11:46,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:46,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:46,431 INFO [Listener at localhost/38083] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-22 18:11:46,432 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-22 18:11:46,432 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-22 18:11:46,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 18:11:46,435 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049506435"}]},"ts":"1690049506435"} 2023-07-22 18:11:46,436 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-22 18:11:46,439 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-22 18:11:46,440 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, UNASSIGN}] 2023-07-22 18:11:46,440 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, UNASSIGN 2023-07-22 18:11:46,441 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=9ca5dd9d1796210375c759b21f9312f7, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:46,441 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049506441"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049506441"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049506441"}]},"ts":"1690049506441"} 2023-07-22 18:11:46,442 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 9ca5dd9d1796210375c759b21f9312f7, server=jenkins-hbase4.apache.org,39103,1690049504000}] 2023-07-22 18:11:46,535 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 18:11:46,594 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9ca5dd9d1796210375c759b21f9312f7, disabling compactions & flushes 2023-07-22 18:11:46,595 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:46,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:46,595 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. after waiting 0 ms 2023-07-22 18:11:46,596 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:46,599 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:46,600 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7. 2023-07-22 18:11:46,600 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9ca5dd9d1796210375c759b21f9312f7: 2023-07-22 18:11:46,602 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,602 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=9ca5dd9d1796210375c759b21f9312f7, regionState=CLOSED 2023-07-22 18:11:46,603 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049506602"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049506602"}]},"ts":"1690049506602"} 2023-07-22 18:11:46,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-22 18:11:46,607 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 9ca5dd9d1796210375c759b21f9312f7, server=jenkins-hbase4.apache.org,39103,1690049504000 in 162 msec 2023-07-22 18:11:46,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-22 18:11:46,609 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=9ca5dd9d1796210375c759b21f9312f7, UNASSIGN in 167 msec 2023-07-22 18:11:46,610 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049506610"}]},"ts":"1690049506610"} 2023-07-22 18:11:46,611 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-22 18:11:46,612 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-22 18:11:46,614 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 181 msec 2023-07-22 18:11:46,736 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 18:11:46,737 INFO [Listener at localhost/38083] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-22 18:11:46,737 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-22 18:11:46,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-22 18:11:46,740 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 18:11:46,740 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-22 18:11:46,741 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 18:11:46,742 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:46,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:46,744 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,746 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/fam1, FileablePath, hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/recovered.edits] 2023-07-22 18:11:46,746 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-22 18:11:46,751 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/recovered.edits/4.seqid to hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/archive/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7/recovered.edits/4.seqid 2023-07-22 18:11:46,751 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/.tmp/data/np1/table1/9ca5dd9d1796210375c759b21f9312f7 2023-07-22 18:11:46,751 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-22 18:11:46,754 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 18:11:46,755 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-22 18:11:46,757 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 
'np1:table1' descriptor. 2023-07-22 18:11:46,758 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 18:11:46,758 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-22 18:11:46,758 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049506758"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:46,759 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 18:11:46,759 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 9ca5dd9d1796210375c759b21f9312f7, NAME => 'np1:table1,,1690049505710.9ca5dd9d1796210375c759b21f9312f7.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 18:11:46,759 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-22 18:11:46,759 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049506759"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:46,761 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-22 18:11:46,764 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-22 18:11:46,765 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 27 msec 2023-07-22 18:11:46,847 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-22 18:11:46,848 INFO [Listener at localhost/38083] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-22 18:11:46,853 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-22 18:11:46,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-22 18:11:46,862 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 18:11:46,865 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 18:11:46,867 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 18:11:46,867 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-22 18:11:46,871 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-22 18:11:46,871 DEBUG [Listener at 
localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:46,872 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 18:11:46,873 INFO [PEWorker-2] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-22 18:11:46,874 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 19 msec 2023-07-22 18:11:46,968 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33193] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-22 18:11:46,969 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-22 18:11:46,969 INFO [Listener at localhost/38083] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-22 18:11:46,969 DEBUG [Listener at localhost/38083] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2eaccd28 to 127.0.0.1:56348 2023-07-22 18:11:46,969 DEBUG [Listener at localhost/38083] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:46,969 DEBUG [Listener at localhost/38083] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-22 18:11:46,969 DEBUG [Listener at localhost/38083] util.JVMClusterUtil(257): Found active master hash=1653439855, stopped=false 2023-07-22 18:11:46,969 DEBUG [Listener at localhost/38083] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 18:11:46,970 DEBUG [Listener at localhost/38083] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 18:11:46,970 DEBUG [Listener at localhost/38083] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-22 18:11:46,970 INFO [Listener at localhost/38083] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:46,971 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:46,971 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:46,971 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:46,972 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:46,971 INFO [Listener at localhost/38083] procedure2.ProcedureExecutor(629): Stopping 2023-07-22 18:11:46,971 DEBUG 
[Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:46,972 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:46,972 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:46,972 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:46,974 DEBUG [Listener at localhost/38083] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x570c6203 to 127.0.0.1:56348 2023-07-22 18:11:46,974 DEBUG [Listener at localhost/38083] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:46,974 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:46,974 INFO [Listener at localhost/38083] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39145,1690049503818' ***** 2023-07-22 18:11:46,974 INFO [Listener at localhost/38083] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:46,974 INFO [Listener at localhost/38083] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39103,1690049504000' ***** 2023-07-22 18:11:46,974 INFO [Listener at localhost/38083] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:46,974 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:46,974 INFO [Listener at localhost/38083] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37903,1690049504159' ***** 2023-07-22 18:11:46,974 INFO [Listener at localhost/38083] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:46,974 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:46,975 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:46,990 INFO [RS:1;jenkins-hbase4:39103] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7aa65829{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:46,990 INFO [RS:0;jenkins-hbase4:39145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5230b41e{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:46,990 INFO [RS:2;jenkins-hbase4:37903] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5c04004{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 
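[editor's note] Between 18:11:46,431 and 18:11:46,969 the test tears down its fixtures: np1:table1 is disabled (pid=20) and deleted (pid=23), namespace np1 is removed (pid=24), and HBaseTestingUtility begins shutting down the minicluster. A hedged sketch of the equivalent Admin calls; admin and TEST_UTIL are assumed handles and cleanupNp1 is an illustrative name, not code from this test:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Assumed: `admin` is an open Admin and `TEST_UTIL` the test's HBaseTestingUtility.
static void cleanupNp1(Admin admin, HBaseTestingUtility TEST_UTIL) throws Exception {
  TableName table1 = TableName.valueOf("np1:table1");
  admin.disableTable(table1);      // DisableTableProcedure (pid=20 above)
  admin.deleteTable(table1);       // DeleteTableProcedure (pid=23) archives the region dir
  admin.deleteNamespace("np1");    // DeleteNamespaceProcedure (pid=24) drops /hbase/namespace/np1
  TEST_UTIL.shutdownMiniCluster(); // stops region servers, master, ZooKeeper and the mini DFS
}
```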
2023-07-22 18:11:46,991 INFO [RS:1;jenkins-hbase4:39103] server.AbstractConnector(383): Stopped ServerConnector@4fbf3e9c{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:46,991 INFO [RS:2;jenkins-hbase4:37903] server.AbstractConnector(383): Stopped ServerConnector@53586b{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:46,991 INFO [RS:0;jenkins-hbase4:39145] server.AbstractConnector(383): Stopped ServerConnector@6d4587e2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:46,991 INFO [RS:1;jenkins-hbase4:39103] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:46,991 INFO [RS:2;jenkins-hbase4:37903] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:46,991 INFO [RS:0;jenkins-hbase4:39145] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:46,992 INFO [RS:1;jenkins-hbase4:39103] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7622fabc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:46,994 INFO [RS:0;jenkins-hbase4:39145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@67ef1e6a{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:46,994 INFO [RS:2;jenkins-hbase4:37903] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2768d20d{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:46,995 INFO [RS:0;jenkins-hbase4:39145] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@68afaa1c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:46,995 INFO [RS:1;jenkins-hbase4:39103] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@575948b0{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:46,995 INFO [RS:2;jenkins-hbase4:37903] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@521b5309{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:46,995 INFO [RS:2;jenkins-hbase4:37903] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:46,996 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:46,996 INFO [RS:1;jenkins-hbase4:39103] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:46,996 INFO [RS:2;jenkins-hbase4:37903] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:46,996 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:46,996 INFO [RS:1;jenkins-hbase4:39103] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
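[editor's note] Each region server stops its info server, HeapMemoryManager, flush procedure manager and snapshot manager before closing its regions. In a minicluster test the same stop can also be requested for a single server; a sketch under the assumption that TEST_UTIL is the test's HBaseTestingUtility (stopOneRegionServer is an illustrative name):

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

// Assumed: `TEST_UTIL` is the test's HBaseTestingUtility.
static void stopOneRegionServer(HBaseTestingUtility TEST_UTIL) throws InterruptedException {
  MiniHBaseCluster cluster = TEST_UTIL.getMiniHBaseCluster();
  JVMClusterUtil.RegionServerThread rs = cluster.stopRegionServer(0); // region server #0 only
  rs.join(); // returns once that server has logged "stopping server ...; all regions closed."
}
```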
2023-07-22 18:11:46,997 INFO [RS:0;jenkins-hbase4:39145] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:46,996 INFO [RS:2;jenkins-hbase4:37903] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:46,997 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:46,997 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:46,997 INFO [RS:0;jenkins-hbase4:39145] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:46,998 DEBUG [RS:2;jenkins-hbase4:37903] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6061f652 to 127.0.0.1:56348 2023-07-22 18:11:46,997 INFO [RS:1;jenkins-hbase4:39103] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:46,998 DEBUG [RS:2;jenkins-hbase4:37903] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:46,998 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(3305): Received CLOSE for e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:46,998 INFO [RS:0;jenkins-hbase4:39145] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:46,998 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37903,1690049504159; all regions closed. 2023-07-22 18:11:46,998 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(3305): Received CLOSE for 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:46,998 DEBUG [RS:2;jenkins-hbase4:37903] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-22 18:11:46,998 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:46,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing e87f08c7dab61bcb04e87b6fa049aaa1, disabling compactions & flushes 2023-07-22 18:11:46,999 DEBUG [RS:1;jenkins-hbase4:39103] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x57fcee06 to 127.0.0.1:56348 2023-07-22 18:11:46,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:46,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:46,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. after waiting 0 ms 2023-07-22 18:11:46,999 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 
2023-07-22 18:11:46,999 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing e87f08c7dab61bcb04e87b6fa049aaa1 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-22 18:11:46,999 DEBUG [RS:1;jenkins-hbase4:39103] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:47,000 INFO [RS:1;jenkins-hbase4:39103] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:47,000 INFO [RS:1;jenkins-hbase4:39103] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:47,001 INFO [RS:1;jenkins-hbase4:39103] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:47,001 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-22 18:11:47,002 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(3305): Received CLOSE for 1cb5a49f5a3118e10cdef510ff05b8d3 2023-07-22 18:11:47,002 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:47,003 DEBUG [RS:0;jenkins-hbase4:39145] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x54482ba2 to 127.0.0.1:56348 2023-07-22 18:11:47,003 DEBUG [RS:0;jenkins-hbase4:39145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:47,003 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-22 18:11:47,003 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-22 18:11:47,005 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1478): Online Regions={e87f08c7dab61bcb04e87b6fa049aaa1=hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1., 1588230740=hbase:meta,,1.1588230740} 2023-07-22 18:11:47,005 DEBUG [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1504): Waiting on 1588230740, e87f08c7dab61bcb04e87b6fa049aaa1 2023-07-22 18:11:47,005 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1478): Online Regions={38e1416df82f316bbcffb47b6cb1b5aa=hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa., 1cb5a49f5a3118e10cdef510ff05b8d3=hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3.} 2023-07-22 18:11:47,005 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:47,006 DEBUG [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1504): Waiting on 1cb5a49f5a3118e10cdef510ff05b8d3, 38e1416df82f316bbcffb47b6cb1b5aa 2023-07-22 18:11:47,005 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 38e1416df82f316bbcffb47b6cb1b5aa, disabling compactions & flushes 2023-07-22 18:11:47,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:47,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:47,006 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 
after waiting 0 ms 2023-07-22 18:11:47,007 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:47,007 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 38e1416df82f316bbcffb47b6cb1b5aa 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-22 18:11:47,005 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:47,008 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 18:11:47,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:47,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:47,008 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:47,008 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-22 18:11:47,009 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:47,010 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:47,020 DEBUG [RS:2;jenkins-hbase4:37903] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs 2023-07-22 18:11:47,021 INFO [RS:2;jenkins-hbase4:37903] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37903%2C1690049504159:(num 1690049504743) 2023-07-22 18:11:47,021 DEBUG [RS:2;jenkins-hbase4:37903] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:47,021 INFO [RS:2;jenkins-hbase4:37903] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:47,025 INFO [RS:2;jenkins-hbase4:37903] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:47,026 INFO [RS:2;jenkins-hbase4:37903] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:47,026 INFO [RS:2;jenkins-hbase4:37903] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:47,026 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:47,026 INFO [RS:2;jenkins-hbase4:37903] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-22 18:11:47,027 INFO [RS:2;jenkins-hbase4:37903] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37903 2023-07-22 18:11:47,043 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/.tmp/info/a846d0a0fbc741e3a6d19e1d2d0d55f6 2023-07-22 18:11:47,049 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a846d0a0fbc741e3a6d19e1d2d0d55f6 2023-07-22 18:11:47,050 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/.tmp/info/a846d0a0fbc741e3a6d19e1d2d0d55f6 as hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/info/a846d0a0fbc741e3a6d19e1d2d0d55f6 2023-07-22 18:11:47,055 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a846d0a0fbc741e3a6d19e1d2d0d55f6 2023-07-22 18:11:47,056 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/info/a846d0a0fbc741e3a6d19e1d2d0d55f6, entries=3, sequenceid=8, filesize=5.0 K 2023-07-22 18:11:47,058 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for e87f08c7dab61bcb04e87b6fa049aaa1 in 59ms, sequenceid=8, compaction requested=false 2023-07-22 18:11:47,058 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-22 18:11:47,067 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/.tmp/m/55935cfd8bc84ded9a3b3df37027e442 2023-07-22 18:11:47,078 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/.tmp/info/5359fecf9ef0451eac895243bc2e95f2 2023-07-22 18:11:47,082 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/.tmp/m/55935cfd8bc84ded9a3b3df37027e442 as hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/m/55935cfd8bc84ded9a3b3df37027e442 2023-07-22 18:11:47,084 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/namespace/e87f08c7dab61bcb04e87b6fa049aaa1/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-22 18:11:47,085 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for e87f08c7dab61bcb04e87b6fa049aaa1: 2023-07-22 18:11:47,085 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690049504976.e87f08c7dab61bcb04e87b6fa049aaa1. 2023-07-22 18:11:47,087 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5359fecf9ef0451eac895243bc2e95f2 2023-07-22 18:11:47,089 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/m/55935cfd8bc84ded9a3b3df37027e442, entries=1, sequenceid=7, filesize=4.9 K 2023-07-22 18:11:47,090 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 38e1416df82f316bbcffb47b6cb1b5aa in 83ms, sequenceid=7, compaction requested=false 2023-07-22 18:11:47,090 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-22 18:11:47,096 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/rsgroup/38e1416df82f316bbcffb47b6cb1b5aa/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:47,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 38e1416df82f316bbcffb47b6cb1b5aa: 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690049505085.38e1416df82f316bbcffb47b6cb1b5aa. 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1cb5a49f5a3118e10cdef510ff05b8d3, disabling compactions & flushes 2023-07-22 18:11:47,097 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 
after waiting 0 ms 2023-07-22 18:11:47,097 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:47,102 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/.tmp/rep_barrier/29f513fe37554df8846f86a5ffef3c8b 2023-07-22 18:11:47,102 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/quota/1cb5a49f5a3118e10cdef510ff05b8d3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:47,104 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:47,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1cb5a49f5a3118e10cdef510ff05b8d3: 2023-07-22 18:11:47,104 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1690049505427.1cb5a49f5a3118e10cdef510ff05b8d3. 2023-07-22 18:11:47,109 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 29f513fe37554df8846f86a5ffef3c8b 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37903,1690049504159 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:47,110 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 
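[editor's note] The close path above flushes each region's memstore to a temporary HFile, commits it into the store, and records the max sequence id in a recovered.edits/<n>.seqid marker (215 B for hbase:namespace, 585 B for hbase:rsgroup, and a 3-family flush for hbase:meta). The same memstore flush can be requested explicitly through the public Admin API rather than relying on flush-on-close; a minimal sketch, assuming an open Admin named admin:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

// Assumed: `admin` is an open Admin on the same cluster.
static void flushBeforeShutdown(Admin admin) throws IOException {
  // Force the flushes that would otherwise happen implicitly on region close.
  admin.flush(TableName.valueOf("hbase:namespace"));
  admin.flush(TableName.valueOf("hbase:meta"));
}
```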
2023-07-22 18:11:47,111 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37903,1690049504159] 2023-07-22 18:11:47,111 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37903,1690049504159; numProcessing=1 2023-07-22 18:11:47,113 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37903,1690049504159 already deleted, retry=false 2023-07-22 18:11:47,113 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37903,1690049504159 expired; onlineServers=2 2023-07-22 18:11:47,120 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/.tmp/table/9e4b9bd4f7bb45a0946792258d9d6518 2023-07-22 18:11:47,125 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e4b9bd4f7bb45a0946792258d9d6518 2023-07-22 18:11:47,125 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/.tmp/info/5359fecf9ef0451eac895243bc2e95f2 as hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/info/5359fecf9ef0451eac895243bc2e95f2 2023-07-22 18:11:47,130 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 5359fecf9ef0451eac895243bc2e95f2 2023-07-22 18:11:47,130 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/info/5359fecf9ef0451eac895243bc2e95f2, entries=32, sequenceid=31, filesize=8.5 K 2023-07-22 18:11:47,131 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/.tmp/rep_barrier/29f513fe37554df8846f86a5ffef3c8b as hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/rep_barrier/29f513fe37554df8846f86a5ffef3c8b 2023-07-22 18:11:47,137 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 29f513fe37554df8846f86a5ffef3c8b 2023-07-22 18:11:47,137 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/rep_barrier/29f513fe37554df8846f86a5ffef3c8b, entries=1, sequenceid=31, filesize=4.9 K 2023-07-22 18:11:47,138 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/.tmp/table/9e4b9bd4f7bb45a0946792258d9d6518 as 
hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/table/9e4b9bd4f7bb45a0946792258d9d6518 2023-07-22 18:11:47,144 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9e4b9bd4f7bb45a0946792258d9d6518 2023-07-22 18:11:47,144 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/table/9e4b9bd4f7bb45a0946792258d9d6518, entries=8, sequenceid=31, filesize=5.2 K 2023-07-22 18:11:47,144 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 136ms, sequenceid=31, compaction requested=false 2023-07-22 18:11:47,145 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-22 18:11:47,153 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-22 18:11:47,154 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:47,154 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:47,154 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:47,154 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:47,206 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39103,1690049504000; all regions closed. 2023-07-22 18:11:47,206 DEBUG [RS:1;jenkins-hbase4:39103] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-22 18:11:47,206 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39145,1690049503818; all regions closed. 2023-07-22 18:11:47,206 DEBUG [RS:0;jenkins-hbase4:39145] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
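[editor's note] As each ephemeral znode under /hbase/rs is deleted, RegionServerTracker treats that server as expired and the master counts down onlineServers. From a client, the same shrinking membership is visible through ClusterMetrics; a hedged sketch, assuming an open Admin (printServerStates is an illustrative name):

```java
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;

// Assumed: `admin` is an open Admin; during shutdown this mirrors the
// onlineServers countdown the master logs above.
static void printServerStates(Admin admin) throws IOException {
  ClusterMetrics metrics = admin.getClusterMetrics(
      EnumSet.of(ClusterMetrics.Option.LIVE_SERVERS, ClusterMetrics.Option.DEAD_SERVERS));
  for (ServerName sn : metrics.getLiveServerMetrics().keySet()) {
    System.out.println("still online: " + sn);
  }
  for (ServerName dead : metrics.getDeadServerNames()) {
    System.out.println("expired: " + dead);
  }
}
```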
2023-07-22 18:11:47,214 DEBUG [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs 2023-07-22 18:11:47,214 INFO [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39103%2C1690049504000.meta:.meta(num 1690049504917) 2023-07-22 18:11:47,216 DEBUG [RS:0;jenkins-hbase4:39145] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs 2023-07-22 18:11:47,216 INFO [RS:0;jenkins-hbase4:39145] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39145%2C1690049503818:(num 1690049504743) 2023-07-22 18:11:47,217 DEBUG [RS:0;jenkins-hbase4:39145] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:47,217 INFO [RS:0;jenkins-hbase4:39145] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:47,217 INFO [RS:0;jenkins-hbase4:39145] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:47,217 INFO [RS:0;jenkins-hbase4:39145] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:47,217 INFO [RS:0;jenkins-hbase4:39145] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:47,217 INFO [RS:0;jenkins-hbase4:39145] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:47,218 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:47,219 INFO [RS:0;jenkins-hbase4:39145] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39145 2023-07-22 18:11:47,222 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:47,222 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39145,1690049503818 2023-07-22 18:11:47,222 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:47,222 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39145,1690049503818] 2023-07-22 18:11:47,223 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39145,1690049503818; numProcessing=2 2023-07-22 18:11:47,223 DEBUG [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/oldWALs 2023-07-22 18:11:47,223 INFO [RS:1;jenkins-hbase4:39103] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39103%2C1690049504000:(num 1690049504741) 2023-07-22 18:11:47,223 DEBUG [RS:1;jenkins-hbase4:39103] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 
18:11:47,223 INFO [RS:1;jenkins-hbase4:39103] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:47,224 INFO [RS:1;jenkins-hbase4:39103] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:47,224 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:47,224 INFO [RS:1;jenkins-hbase4:39103] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39103 2023-07-22 18:11:47,227 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39145,1690049503818 already deleted, retry=false 2023-07-22 18:11:47,227 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39145,1690049503818 expired; onlineServers=1 2023-07-22 18:11:47,227 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39103,1690049504000 2023-07-22 18:11:47,227 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:47,228 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39103,1690049504000] 2023-07-22 18:11:47,228 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39103,1690049504000; numProcessing=3 2023-07-22 18:11:47,229 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39103,1690049504000 already deleted, retry=false 2023-07-22 18:11:47,229 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39103,1690049504000 expired; onlineServers=0 2023-07-22 18:11:47,229 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33193,1690049503617' ***** 2023-07-22 18:11:47,229 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-22 18:11:47,230 DEBUG [M:0;jenkins-hbase4:33193] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@66b2f5e6, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:47,230 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:47,232 INFO [M:0;jenkins-hbase4:33193] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4cf8ce7b{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 18:11:47,232 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:47,232 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:47,232 INFO [M:0;jenkins-hbase4:33193] server.AbstractConnector(383): Stopped ServerConnector@15f599c4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:47,232 INFO [M:0;jenkins-hbase4:33193] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:47,232 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:47,232 INFO [M:0;jenkins-hbase4:33193] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@46d38481{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:47,232 INFO [M:0;jenkins-hbase4:33193] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6cc3484e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:47,233 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33193,1690049503617 2023-07-22 18:11:47,233 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33193,1690049503617; all regions closed. 2023-07-22 18:11:47,233 DEBUG [M:0;jenkins-hbase4:33193] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:47,233 INFO [M:0;jenkins-hbase4:33193] master.HMaster(1491): Stopping master jetty server 2023-07-22 18:11:47,233 INFO [M:0;jenkins-hbase4:33193] server.AbstractConnector(383): Stopped ServerConnector@3878e383{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:47,234 DEBUG [M:0;jenkins-hbase4:33193] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-22 18:11:47,234 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-22 18:11:47,234 DEBUG [M:0;jenkins-hbase4:33193] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-22 18:11:47,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049504487] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049504487,5,FailOnTimeoutGroup] 2023-07-22 18:11:47,234 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049504487] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049504487,5,FailOnTimeoutGroup] 2023-07-22 18:11:47,234 INFO [M:0;jenkins-hbase4:33193] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-22 18:11:47,235 INFO [M:0;jenkins-hbase4:33193] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-22 18:11:47,235 INFO [M:0;jenkins-hbase4:33193] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:47,235 DEBUG [M:0;jenkins-hbase4:33193] master.HMaster(1512): Stopping service threads 2023-07-22 18:11:47,235 INFO [M:0;jenkins-hbase4:33193] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-22 18:11:47,236 ERROR [M:0;jenkins-hbase4:33193] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-22 18:11:47,236 INFO [M:0;jenkins-hbase4:33193] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-22 18:11:47,236 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-22 18:11:47,236 DEBUG [M:0;jenkins-hbase4:33193] zookeeper.ZKUtil(398): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-22 18:11:47,237 WARN [M:0;jenkins-hbase4:33193] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-22 18:11:47,237 INFO [M:0;jenkins-hbase4:33193] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-22 18:11:47,237 INFO [M:0;jenkins-hbase4:33193] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-22 18:11:47,237 DEBUG [M:0;jenkins-hbase4:33193] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 18:11:47,237 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:47,237 DEBUG [M:0;jenkins-hbase4:33193] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:47,237 DEBUG [M:0;jenkins-hbase4:33193] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 18:11:47,237 DEBUG [M:0;jenkins-hbase4:33193] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 18:11:47,237 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=92.98 KB heapSize=109.13 KB 2023-07-22 18:11:47,249 INFO [M:0;jenkins-hbase4:33193] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=92.98 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0856d5e9134741aca8afe71a3ca02e55 2023-07-22 18:11:47,254 DEBUG [M:0;jenkins-hbase4:33193] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/0856d5e9134741aca8afe71a3ca02e55 as hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0856d5e9134741aca8afe71a3ca02e55 2023-07-22 18:11:47,259 INFO [M:0;jenkins-hbase4:33193] regionserver.HStore(1080): Added hdfs://localhost:43261/user/jenkins/test-data/e32b54ab-103b-2518-e9a4-d969357b73fe/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/0856d5e9134741aca8afe71a3ca02e55, entries=24, sequenceid=194, filesize=12.4 K 2023-07-22 18:11:47,260 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegion(2948): Finished flush of dataSize ~92.98 KB/95216, heapSize ~109.12 KB/111736, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 23ms, sequenceid=194, compaction requested=false 2023-07-22 18:11:47,262 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:47,262 DEBUG [M:0;jenkins-hbase4:33193] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:47,266 INFO [M:0;jenkins-hbase4:33193] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-22 18:11:47,266 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:47,266 INFO [M:0;jenkins-hbase4:33193] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33193 2023-07-22 18:11:47,268 DEBUG [M:0;jenkins-hbase4:33193] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,33193,1690049503617 already deleted, retry=false 2023-07-22 18:11:47,472 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,473 INFO [M:0;jenkins-hbase4:33193] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33193,1690049503617; zookeeper connection closed. 
2023-07-22 18:11:47,473 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): master:33193-0x1018e3b64600000, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,573 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,573 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39103-0x1018e3b64600002, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,573 INFO [RS:1;jenkins-hbase4:39103] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39103,1690049504000; zookeeper connection closed. 2023-07-22 18:11:47,575 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@15be3d41] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@15be3d41 2023-07-22 18:11:47,673 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,673 INFO [RS:0;jenkins-hbase4:39145] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39145,1690049503818; zookeeper connection closed. 2023-07-22 18:11:47,673 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:39145-0x1018e3b64600001, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,674 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c518697] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c518697 2023-07-22 18:11:47,773 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,773 INFO [RS:2;jenkins-hbase4:37903] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37903,1690049504159; zookeeper connection closed. 
2023-07-22 18:11:47,774 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): regionserver:37903-0x1018e3b64600003, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:47,774 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@67cee2c7] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@67cee2c7 2023-07-22 18:11:47,774 INFO [Listener at localhost/38083] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-22 18:11:47,774 WARN [Listener at localhost/38083] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:47,778 INFO [Listener at localhost/38083] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:47,883 WARN [BP-1975014628-172.31.14.131-1690049502765 heartbeating to localhost/127.0.0.1:43261] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:47,885 WARN [BP-1975014628-172.31.14.131-1690049502765 heartbeating to localhost/127.0.0.1:43261] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1975014628-172.31.14.131-1690049502765 (Datanode Uuid c1184ebc-8272-4308-b990-3d2cd1f0f589) service to localhost/127.0.0.1:43261 2023-07-22 18:11:47,885 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/dfs/data/data5/current/BP-1975014628-172.31.14.131-1690049502765] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:47,886 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/dfs/data/data6/current/BP-1975014628-172.31.14.131-1690049502765] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:47,889 WARN [Listener at localhost/38083] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:47,892 INFO [Listener at localhost/38083] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:47,997 WARN [BP-1975014628-172.31.14.131-1690049502765 heartbeating to localhost/127.0.0.1:43261] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:47,998 WARN [BP-1975014628-172.31.14.131-1690049502765 heartbeating to localhost/127.0.0.1:43261] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1975014628-172.31.14.131-1690049502765 (Datanode Uuid 855b84be-30cd-4b88-9095-222559fb9481) service to localhost/127.0.0.1:43261 2023-07-22 18:11:47,999 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/dfs/data/data3/current/BP-1975014628-172.31.14.131-1690049502765] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:47,999 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/dfs/data/data4/current/BP-1975014628-172.31.14.131-1690049502765] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:48,000 WARN [Listener at localhost/38083] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:48,003 INFO [Listener at localhost/38083] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:48,105 WARN [BP-1975014628-172.31.14.131-1690049502765 heartbeating to localhost/127.0.0.1:43261] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:48,105 WARN [BP-1975014628-172.31.14.131-1690049502765 heartbeating to localhost/127.0.0.1:43261] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1975014628-172.31.14.131-1690049502765 (Datanode Uuid ea0320f5-eaa6-4105-9fc1-afdcf98a7b06) service to localhost/127.0.0.1:43261 2023-07-22 18:11:48,106 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/dfs/data/data1/current/BP-1975014628-172.31.14.131-1690049502765] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:48,107 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/cluster_8cebeb0e-c207-07ae-6c26-550c35d41020/dfs/data/data2/current/BP-1975014628-172.31.14.131-1690049502765] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:48,116 INFO [Listener at localhost/38083] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:48,231 INFO [Listener at localhost/38083] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-22 18:11:48,261 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-22 18:11:48,261 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-22 18:11:48,261 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.log.dir so I do NOT create it in target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74 2023-07-22 18:11:48,261 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/b751bc4b-eb72-770d-4bc9-305eb9701a78/hadoop.tmp.dir so I do NOT create it in target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseZKTestingUtility(82): Created new 
mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7, deleteOnExit=true 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/test.cache.data in system properties and HBase conf 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.tmp.dir in system properties and HBase conf 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir in system properties and HBase conf 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-22 18:11:48,262 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-22 18:11:48,262 DEBUG [Listener at localhost/38083] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/nfs.dump.dir in system properties and HBase conf 2023-07-22 18:11:48,263 INFO [Listener at localhost/38083] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir in system properties and HBase conf 2023-07-22 18:11:48,264 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-22 18:11:48,264 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-22 18:11:48,264 INFO [Listener at localhost/38083] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-22 18:11:48,268 WARN [Listener at localhost/38083] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 18:11:48,268 WARN [Listener at localhost/38083] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 18:11:48,315 WARN [Listener at localhost/38083] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:48,317 INFO [Listener at localhost/38083] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:48,323 INFO [Listener at localhost/38083] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/Jetty_localhost_35165_hdfs____3gpu48/webapp 2023-07-22 18:11:48,329 DEBUG [Listener at localhost/38083-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1018e3b6460000a, quorum=127.0.0.1:56348, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-22 18:11:48,329 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1018e3b6460000a, quorum=127.0.0.1:56348, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-22 18:11:48,417 INFO [Listener at localhost/38083] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35165 2023-07-22 18:11:48,421 WARN [Listener at localhost/38083] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-22 18:11:48,421 WARN [Listener at localhost/38083] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-22 18:11:48,464 WARN [Listener at localhost/35975] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:48,481 WARN [Listener at localhost/35975] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 18:11:48,483 WARN [Listener 
at localhost/35975] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:48,484 INFO [Listener at localhost/35975] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:48,490 INFO [Listener at localhost/35975] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/Jetty_localhost_33815_datanode____.9sreal/webapp 2023-07-22 18:11:48,583 INFO [Listener at localhost/35975] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33815 2023-07-22 18:11:48,589 WARN [Listener at localhost/34273] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:48,608 WARN [Listener at localhost/34273] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 18:11:48,610 WARN [Listener at localhost/34273] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:48,611 INFO [Listener at localhost/34273] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:48,614 INFO [Listener at localhost/34273] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/Jetty_localhost_44285_datanode____940u52/webapp 2023-07-22 18:11:48,698 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdf871a6ebb92b2cc: Processing first storage report for DS-333fb974-6b27-4140-a367-0fecceaaf284 from datanode c1b456db-f2c0-4c0d-bd3d-e694123a8179 2023-07-22 18:11:48,698 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdf871a6ebb92b2cc: from storage DS-333fb974-6b27-4140-a367-0fecceaaf284 node DatanodeRegistration(127.0.0.1:33395, datanodeUuid=c1b456db-f2c0-4c0d-bd3d-e694123a8179, infoPort=44821, infoSecurePort=0, ipcPort=34273, storageInfo=lv=-57;cid=testClusterID;nsid=1653902116;c=1690049508270), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:48,698 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xdf871a6ebb92b2cc: Processing first storage report for DS-e4368ae3-031b-4e59-91c5-578a06c58b95 from datanode c1b456db-f2c0-4c0d-bd3d-e694123a8179 2023-07-22 18:11:48,698 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xdf871a6ebb92b2cc: from storage DS-e4368ae3-031b-4e59-91c5-578a06c58b95 node DatanodeRegistration(127.0.0.1:33395, datanodeUuid=c1b456db-f2c0-4c0d-bd3d-e694123a8179, infoPort=44821, infoSecurePort=0, ipcPort=34273, storageInfo=lv=-57;cid=testClusterID;nsid=1653902116;c=1690049508270), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:48,719 INFO [Listener at localhost/34273] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44285 2023-07-22 18:11:48,725 WARN [Listener at localhost/40541] common.MetricsLoggerTask(153): Metrics logging will not be async 
since the logger is not log4j 2023-07-22 18:11:48,742 WARN [Listener at localhost/40541] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-22 18:11:48,744 WARN [Listener at localhost/40541] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-22 18:11:48,745 INFO [Listener at localhost/40541] log.Slf4jLog(67): jetty-6.1.26 2023-07-22 18:11:48,748 INFO [Listener at localhost/40541] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/Jetty_localhost_39359_datanode____.ovorl4/webapp 2023-07-22 18:11:48,830 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ae209fad8df419: Processing first storage report for DS-4070490d-8f75-4434-8c16-7e4ba6b38225 from datanode bcbfa0d3-c776-46ee-9f2e-48b9bbb352cb 2023-07-22 18:11:48,830 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ae209fad8df419: from storage DS-4070490d-8f75-4434-8c16-7e4ba6b38225 node DatanodeRegistration(127.0.0.1:38427, datanodeUuid=bcbfa0d3-c776-46ee-9f2e-48b9bbb352cb, infoPort=46147, infoSecurePort=0, ipcPort=40541, storageInfo=lv=-57;cid=testClusterID;nsid=1653902116;c=1690049508270), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:48,830 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x9ae209fad8df419: Processing first storage report for DS-1077d0a5-cb2f-40df-899f-1d9d5d1ea614 from datanode bcbfa0d3-c776-46ee-9f2e-48b9bbb352cb 2023-07-22 18:11:48,830 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x9ae209fad8df419: from storage DS-1077d0a5-cb2f-40df-899f-1d9d5d1ea614 node DatanodeRegistration(127.0.0.1:38427, datanodeUuid=bcbfa0d3-c776-46ee-9f2e-48b9bbb352cb, infoPort=46147, infoSecurePort=0, ipcPort=40541, storageInfo=lv=-57;cid=testClusterID;nsid=1653902116;c=1690049508270), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:48,847 INFO [Listener at localhost/40541] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39359 2023-07-22 18:11:48,856 WARN [Listener at localhost/32999] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-22 18:11:48,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x566a45e8a5f138e7: Processing first storage report for DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb from datanode c945822f-8771-4245-9867-0c708d2a36c4 2023-07-22 18:11:48,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x566a45e8a5f138e7: from storage DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb node DatanodeRegistration(127.0.0.1:43689, datanodeUuid=c945822f-8771-4245-9867-0c708d2a36c4, infoPort=40475, infoSecurePort=0, ipcPort=32999, storageInfo=lv=-57;cid=testClusterID;nsid=1653902116;c=1690049508270), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:48,945 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x566a45e8a5f138e7: Processing first storage 
report for DS-8139b4cc-404d-44c4-a93c-57c9a6c240b1 from datanode c945822f-8771-4245-9867-0c708d2a36c4 2023-07-22 18:11:48,945 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x566a45e8a5f138e7: from storage DS-8139b4cc-404d-44c4-a93c-57c9a6c240b1 node DatanodeRegistration(127.0.0.1:43689, datanodeUuid=c945822f-8771-4245-9867-0c708d2a36c4, infoPort=40475, infoSecurePort=0, ipcPort=32999, storageInfo=lv=-57;cid=testClusterID;nsid=1653902116;c=1690049508270), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-22 18:11:48,966 DEBUG [Listener at localhost/32999] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74 2023-07-22 18:11:48,968 INFO [Listener at localhost/32999] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/zookeeper_0, clientPort=64378, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-22 18:11:48,969 INFO [Listener at localhost/32999] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=64378 2023-07-22 18:11:48,969 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:48,970 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:48,985 INFO [Listener at localhost/32999] util.FSUtils(471): Created version file at hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e with version=8 2023-07-22 18:11:48,985 INFO [Listener at localhost/32999] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:43335/user/jenkins/test-data/a95bc55d-9aa0-a2a0-a7a9-a94c7a5b1eaf/hbase-staging 2023-07-22 18:11:48,986 DEBUG [Listener at localhost/32999] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-22 18:11:48,986 DEBUG [Listener at localhost/32999] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-22 18:11:48,986 DEBUG [Listener at localhost/32999] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-22 18:11:48,986 DEBUG [Listener at localhost/32999] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:48,987 INFO [Listener at localhost/32999] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:48,988 INFO [Listener at localhost/32999] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44283 2023-07-22 18:11:48,988 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:48,989 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:48,990 INFO [Listener at localhost/32999] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44283 connecting to ZooKeeper ensemble=127.0.0.1:64378 2023-07-22 18:11:48,999 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:442830x0, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:49,000 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44283-0x1018e3b796b0000 connected 2023-07-22 18:11:49,014 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:49,015 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:49,015 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:49,015 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44283 2023-07-22 18:11:49,018 DEBUG [Listener at localhost/32999] 
ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44283 2023-07-22 18:11:49,018 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44283 2023-07-22 18:11:49,021 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44283 2023-07-22 18:11:49,022 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44283 2023-07-22 18:11:49,023 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:49,024 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:49,024 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:49,024 INFO [Listener at localhost/32999] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-22 18:11:49,024 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:49,024 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:49,024 INFO [Listener at localhost/32999] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 18:11:49,025 INFO [Listener at localhost/32999] http.HttpServer(1146): Jetty bound to port 44621 2023-07-22 18:11:49,025 INFO [Listener at localhost/32999] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:49,026 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,026 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4e484fe9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:49,027 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,027 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@64e35faa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:49,141 INFO [Listener at localhost/32999] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:49,142 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:49,142 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:49,142 INFO [Listener at localhost/32999] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 18:11:49,143 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,144 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@63887653{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/jetty-0_0_0_0-44621-hbase-server-2_4_18-SNAPSHOT_jar-_-any-1417885600034629309/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 18:11:49,145 INFO [Listener at localhost/32999] server.AbstractConnector(333): Started ServerConnector@79271683{HTTP/1.1, (http/1.1)}{0.0.0.0:44621} 2023-07-22 18:11:49,146 INFO [Listener at localhost/32999] server.Server(415): Started @43098ms 2023-07-22 18:11:49,146 INFO [Listener at localhost/32999] master.HMaster(444): hbase.rootdir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e, hbase.cluster.distributed=false 2023-07-22 18:11:49,158 INFO [Listener at localhost/32999] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:49,158 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,159 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,159 
INFO [Listener at localhost/32999] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:49,159 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,159 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:49,159 INFO [Listener at localhost/32999] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:49,159 INFO [Listener at localhost/32999] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38645 2023-07-22 18:11:49,160 INFO [Listener at localhost/32999] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:49,161 DEBUG [Listener at localhost/32999] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:49,161 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,162 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,163 INFO [Listener at localhost/32999] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38645 connecting to ZooKeeper ensemble=127.0.0.1:64378 2023-07-22 18:11:49,168 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:386450x0, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:49,169 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38645-0x1018e3b796b0001 connected 2023-07-22 18:11:49,169 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:49,170 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:49,170 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:49,171 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38645 2023-07-22 18:11:49,171 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38645 2023-07-22 18:11:49,172 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38645 2023-07-22 18:11:49,173 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38645 2023-07-22 18:11:49,173 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=38645 2023-07-22 18:11:49,174 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:49,175 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:49,175 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:49,175 INFO [Listener at localhost/32999] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:49,175 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:49,175 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:49,175 INFO [Listener at localhost/32999] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 18:11:49,176 INFO [Listener at localhost/32999] http.HttpServer(1146): Jetty bound to port 34037 2023-07-22 18:11:49,176 INFO [Listener at localhost/32999] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:49,178 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,179 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@25e1627f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:49,179 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,179 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@782404d1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:49,292 INFO [Listener at localhost/32999] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:49,292 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:49,292 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:49,293 INFO [Listener at localhost/32999] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:49,293 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,294 INFO 
[Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4c6e43be{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/jetty-0_0_0_0-34037-hbase-server-2_4_18-SNAPSHOT_jar-_-any-308730126583525534/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:49,295 INFO [Listener at localhost/32999] server.AbstractConnector(333): Started ServerConnector@25e91bdf{HTTP/1.1, (http/1.1)}{0.0.0.0:34037} 2023-07-22 18:11:49,295 INFO [Listener at localhost/32999] server.Server(415): Started @43248ms 2023-07-22 18:11:49,307 INFO [Listener at localhost/32999] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:49,307 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,307 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,307 INFO [Listener at localhost/32999] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:49,307 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,307 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:49,308 INFO [Listener at localhost/32999] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:49,310 INFO [Listener at localhost/32999] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46437 2023-07-22 18:11:49,310 INFO [Listener at localhost/32999] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:49,311 DEBUG [Listener at localhost/32999] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:49,312 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,313 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,313 INFO [Listener at localhost/32999] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46437 connecting to ZooKeeper ensemble=127.0.0.1:64378 2023-07-22 18:11:49,317 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:464370x0, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 
18:11:49,319 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46437-0x1018e3b796b0002 connected 2023-07-22 18:11:49,319 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:49,319 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:49,320 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:49,320 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46437 2023-07-22 18:11:49,320 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46437 2023-07-22 18:11:49,322 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46437 2023-07-22 18:11:49,324 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46437 2023-07-22 18:11:49,325 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46437 2023-07-22 18:11:49,326 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:49,326 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:49,327 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:49,327 INFO [Listener at localhost/32999] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:49,327 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:49,327 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:49,327 INFO [Listener at localhost/32999] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-22 18:11:49,328 INFO [Listener at localhost/32999] http.HttpServer(1146): Jetty bound to port 37849 2023-07-22 18:11:49,328 INFO [Listener at localhost/32999] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:49,333 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,333 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@69348f08{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:49,333 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,333 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1989f106{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:49,445 INFO [Listener at localhost/32999] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:49,446 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:49,446 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:49,446 INFO [Listener at localhost/32999] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-22 18:11:49,447 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,448 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@50144ee6{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/jetty-0_0_0_0-37849-hbase-server-2_4_18-SNAPSHOT_jar-_-any-8341298141610746304/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:49,451 INFO [Listener at localhost/32999] server.AbstractConnector(333): Started ServerConnector@46096509{HTTP/1.1, (http/1.1)}{0.0.0.0:37849} 2023-07-22 18:11:49,451 INFO [Listener at localhost/32999] server.Server(415): Started @43403ms 2023-07-22 18:11:49,463 INFO [Listener at localhost/32999] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:49,463 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,463 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,463 INFO [Listener at localhost/32999] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:49,463 INFO 
[Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:49,463 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:49,463 INFO [Listener at localhost/32999] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:49,464 INFO [Listener at localhost/32999] ipc.NettyRpcServer(120): Bind to /172.31.14.131:38757 2023-07-22 18:11:49,464 INFO [Listener at localhost/32999] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:49,466 DEBUG [Listener at localhost/32999] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:49,466 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,467 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,468 INFO [Listener at localhost/32999] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:38757 connecting to ZooKeeper ensemble=127.0.0.1:64378 2023-07-22 18:11:49,471 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:387570x0, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:49,472 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:387570x0, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:49,473 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:38757-0x1018e3b796b0003 connected 2023-07-22 18:11:49,473 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:49,473 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:49,474 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=38757 2023-07-22 18:11:49,474 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=38757 2023-07-22 18:11:49,474 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=38757 2023-07-22 18:11:49,475 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=38757 2023-07-22 18:11:49,475 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, 
numCallQueues=1, port=38757 2023-07-22 18:11:49,476 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:49,476 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:49,477 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:49,477 INFO [Listener at localhost/32999] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:49,477 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:49,477 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:49,477 INFO [Listener at localhost/32999] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-22 18:11:49,478 INFO [Listener at localhost/32999] http.HttpServer(1146): Jetty bound to port 35669 2023-07-22 18:11:49,478 INFO [Listener at localhost/32999] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:49,481 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,482 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1d10ced3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:49,482 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,482 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@5c6daf69{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:49,599 INFO [Listener at localhost/32999] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:49,599 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:49,599 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:49,600 INFO [Listener at localhost/32999] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:49,600 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:49,601 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@54a596b9{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/jetty-0_0_0_0-35669-hbase-server-2_4_18-SNAPSHOT_jar-_-any-704761319895399675/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:49,602 INFO [Listener at localhost/32999] server.AbstractConnector(333): Started ServerConnector@12826d0e{HTTP/1.1, (http/1.1)}{0.0.0.0:35669} 2023-07-22 18:11:49,603 INFO [Listener at localhost/32999] server.Server(415): Started @43555ms 2023-07-22 18:11:49,605 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:49,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@5e47ccd2{HTTP/1.1, (http/1.1)}{0.0.0.0:39739} 2023-07-22 18:11:49,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43560ms 2023-07-22 18:11:49,608 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,609 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 18:11:49,609 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,612 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:49,612 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:49,612 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:49,612 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:49,614 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:49,614 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:49,615 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): 
master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:49,615 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44283,1690049508986 from backup master directory 2023-07-22 18:11:49,616 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,616 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-22 18:11:49,616 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:49,617 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,633 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/hbase.id with ID: 21deeca3-acd1-4ccc-917a-1f9039f825b8 2023-07-22 18:11:49,642 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:49,645 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:49,655 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x51aab894 to 127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:49,662 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@36c89848, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:49,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:49,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-22 18:11:49,662 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:49,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store-tmp 2023-07-22 18:11:49,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:49,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 18:11:49,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:49,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:49,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 18:11:49,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:49,676 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 18:11:49,676 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:49,677 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/WALs/jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,680 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44283%2C1690049508986, suffix=, logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/WALs/jenkins-hbase4.apache.org,44283,1690049508986, archiveDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/oldWALs, maxLogs=10 2023-07-22 18:11:49,702 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK] 2023-07-22 18:11:49,706 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK] 2023-07-22 18:11:49,706 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK] 2023-07-22 18:11:49,709 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/WALs/jenkins-hbase4.apache.org,44283,1690049508986/jenkins-hbase4.apache.org%2C44283%2C1690049508986.1690049509681 2023-07-22 18:11:49,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK], DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK], DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK]] 2023-07-22 18:11:49,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:49,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:49,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:49,710 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:49,711 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:49,713 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-22 18:11:49,713 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-22 18:11:49,714 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:49,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:49,715 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:49,718 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-22 18:11:49,722 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:49,722 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11177139680, jitterRate=0.040952250361442566}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:49,723 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:49,726 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-22 18:11:49,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-22 18:11:49,728 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-22 18:11:49,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-22 18:11:49,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-22 18:11:49,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-22 18:11:49,729 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-22 18:11:49,730 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-22 18:11:49,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-22 18:11:49,731 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-22 18:11:49,731 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-22 18:11:49,732 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-22 18:11:49,735 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:49,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-22 18:11:49,735 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-22 18:11:49,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-22 18:11:49,737 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:49,737 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:49,737 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-22 18:11:49,738 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:49,738 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:49,738 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44283,1690049508986, sessionid=0x1018e3b796b0000, setting cluster-up flag (Was=false) 2023-07-22 18:11:49,742 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:49,747 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-22 18:11:49,748 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,752 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:49,755 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-22 18:11:49,756 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:49,757 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.hbase-snapshot/.tmp 2023-07-22 18:11:49,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-22 18:11:49,761 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-22 18:11:49,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-22 18:11:49,763 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 2023-07-22 18:11:49,765 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 18:11:49,765 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:49,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 18:11:49,782 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 18:11:49,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-22 18:11:49,783 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:49,783 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, 
state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1690049539791 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-22 18:11:49,791 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-22 18:11:49,793 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,793 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:49,793 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-22 18:11:49,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-22 18:11:49,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-22 18:11:49,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-22 18:11:49,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-22 18:11:49,794 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-22 18:11:49,795 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'8192', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:49,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049509794,5,FailOnTimeoutGroup] 2023-07-22 18:11:49,799 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049509799,5,FailOnTimeoutGroup] 2023-07-22 18:11:49,799 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-22 18:11:49,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,801 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,812 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(951): ClusterId : 21deeca3-acd1-4ccc-917a-1f9039f825b8 2023-07-22 18:11:49,812 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:49,813 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(951): ClusterId : 21deeca3-acd1-4ccc-917a-1f9039f825b8 2023-07-22 18:11:49,813 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:49,813 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(951): ClusterId : 21deeca3-acd1-4ccc-917a-1f9039f825b8 2023-07-22 18:11:49,813 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:49,817 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:49,817 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:49,817 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:49,817 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:49,817 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:49,818 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:49,818 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 
'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e 2023-07-22 18:11:49,819 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:49,821 DEBUG [RS:0;jenkins-hbase4:38645] zookeeper.ReadOnlyZKClient(139): Connect 0x665b7fd1 to 127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:49,823 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:49,823 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:49,826 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:49,832 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:49,834 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ReadOnlyZKClient(139): Connect 0x6da1db12 to 127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:49,834 DEBUG [RS:2;jenkins-hbase4:38757] zookeeper.ReadOnlyZKClient(139): Connect 0x1caaede6 to 127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:49,847 DEBUG [RS:0;jenkins-hbase4:38645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@689c9d6a, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:49,848 DEBUG [RS:0;jenkins-hbase4:38645] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6ca314a1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:49,855 DEBUG [RS:2;jenkins-hbase4:38757] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@59a87017, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:49,855 DEBUG [RS:2;jenkins-hbase4:38757] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5b4c3b82, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, 
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:49,859 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1659fe1f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:49,859 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c867064, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:49,861 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:38645 2023-07-22 18:11:49,861 INFO [RS:0;jenkins-hbase4:38645] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:49,861 INFO [RS:0;jenkins-hbase4:38645] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:49,861 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:49,862 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44283,1690049508986 with isa=jenkins-hbase4.apache.org/172.31.14.131:38645, startcode=1690049509158 2023-07-22 18:11:49,862 DEBUG [RS:0;jenkins-hbase4:38645] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:49,864 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:44925, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:49,865 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:38757 2023-07-22 18:11:49,866 INFO [RS:2;jenkins-hbase4:38757] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:49,867 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:49,867 INFO [RS:2;jenkins-hbase4:38757] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:49,867 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:49,867 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 18:11:49,871 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-22 18:11:49,871 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e 2023-07-22 18:11:49,871 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35975 2023-07-22 18:11:49,872 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44621 2023-07-22 18:11:49,872 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44283,1690049508986 with isa=jenkins-hbase4.apache.org/172.31.14.131:38757, startcode=1690049509462 2023-07-22 18:11:49,873 DEBUG [RS:2;jenkins-hbase4:38757] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:49,874 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:49,874 DEBUG [RS:0;jenkins-hbase4:38645] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:49,874 WARN [RS:0;jenkins-hbase4:38645] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:49,874 INFO [RS:0;jenkins-hbase4:38645] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:49,874 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:49,883 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:49,883 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:46437 2023-07-22 18:11:49,883 INFO [RS:1;jenkins-hbase4:46437] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:49,883 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38645,1690049509158] 2023-07-22 18:11:49,883 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52303, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:49,884 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:49,883 INFO [RS:1;jenkins-hbase4:46437] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:49,886 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:49,886 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-22 18:11:49,886 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e 2023-07-22 18:11:49,886 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-22 18:11:49,887 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35975 2023-07-22 18:11:49,887 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44621 2023-07-22 18:11:49,888 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:49,890 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/info 2023-07-22 18:11:49,890 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:49,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:49,891 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:49,894 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44283,1690049508986 with isa=jenkins-hbase4.apache.org/172.31.14.131:46437, startcode=1690049509307 2023-07-22 18:11:49,895 DEBUG [RS:1;jenkins-hbase4:46437] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:49,896 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51763, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:49,897 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:49,897 INFO 
[org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-22 18:11:49,897 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-22 18:11:49,897 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e 2023-07-22 18:11:49,897 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35975 2023-07-22 18:11:49,897 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44621 2023-07-22 18:11:49,898 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:49,899 DEBUG [RS:2;jenkins-hbase4:38757] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:49,899 WARN [RS:2;jenkins-hbase4:38757] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-22 18:11:49,900 INFO [RS:2;jenkins-hbase4:38757] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:49,900 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:49,900 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,38757,1690049509462] 2023-07-22 18:11:49,900 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:49,900 DEBUG [RS:0;jenkins-hbase4:38645] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:49,900 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46437,1690049509307] 2023-07-22 18:11:49,900 WARN [RS:1;jenkins-hbase4:46437] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:49,902 INFO [RS:1;jenkins-hbase4:46437] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:49,903 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:49,903 DEBUG [RS:0;jenkins-hbase4:38645] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:49,903 DEBUG [RS:0;jenkins-hbase4:38645] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:49,904 DEBUG [RS:0;jenkins-hbase4:38645] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:49,905 INFO [RS:0;jenkins-hbase4:38645] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:49,908 INFO [RS:0;jenkins-hbase4:38645] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:49,915 INFO [RS:0;jenkins-hbase4:38645] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:49,915 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:49,916 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:49,916 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:49,917 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:49,917 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:49,918 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:49,919 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:49,919 INFO [RS:1;jenkins-hbase4:46437] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:49,925 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:49,927 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:49,943 INFO [RS:1;jenkins-hbase4:46437] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:49,943 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:49,944 DEBUG [RS:2;jenkins-hbase4:38757] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:49,945 DEBUG [RS:2;jenkins-hbase4:38757] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:49,945 DEBUG [RS:2;jenkins-hbase4:38757] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, 
baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:49,946 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/table 2023-07-22 18:11:49,946 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:49,948 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:49,948 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:49,948 INFO [RS:2;jenkins-hbase4:38757] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:49,951 INFO [RS:1;jenkins-hbase4:46437] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:49,951 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,951 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:49,956 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,956 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,956 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,957 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,958 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:49,958 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:49,959 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:0;jenkins-hbase4:38645] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,959 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,960 DEBUG [RS:1;jenkins-hbase4:46437] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:49,962 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740 2023-07-22 18:11:49,970 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,971 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,971 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,975 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740 2023-07-22 18:11:49,978 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-22 18:11:49,980 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:49,988 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,988 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:49,988 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,017 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:50,020 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9726283680, jitterRate=-0.0941692441701889}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:50,020 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:50,020 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:50,020 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 18:11:50,020 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:50,020 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:50,020 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:50,021 INFO [RS:0;jenkins-hbase4:38645] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:50,022 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38645,1690049509158-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:50,028 INFO [RS:1;jenkins-hbase4:46437] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:50,028 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46437,1690049509307-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,029 INFO [RS:2;jenkins-hbase4:38757] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:50,030 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:50,030 INFO [RS:2;jenkins-hbase4:38757] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:50,030 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,030 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:50,030 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:50,039 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,039 DEBUG [RS:2;jenkins-hbase4:38757] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:50,042 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): 
Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-22 18:11:50,042 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-22 18:11:50,042 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-22 18:11:50,046 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,047 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,047 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,058 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-22 18:11:50,062 INFO [RS:0;jenkins-hbase4:38645] regionserver.Replication(203): jenkins-hbase4.apache.org,38645,1690049509158 started 2023-07-22 18:11:50,062 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38645,1690049509158, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38645, sessionid=0x1018e3b796b0001 2023-07-22 18:11:50,068 INFO [RS:1;jenkins-hbase4:46437] regionserver.Replication(203): jenkins-hbase4.apache.org,46437,1690049509307 started 2023-07-22 18:11:50,068 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:50,068 DEBUG [RS:0;jenkins-hbase4:38645] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:50,068 DEBUG [RS:0;jenkins-hbase4:38645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38645,1690049509158' 2023-07-22 18:11:50,068 DEBUG [RS:0;jenkins-hbase4:38645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:50,069 DEBUG [RS:0;jenkins-hbase4:38645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:50,069 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-22 18:11:50,069 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:50,069 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:50,069 INFO [RS:2;jenkins-hbase4:38757] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:50,069 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,38757,1690049509462-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:50,068 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46437,1690049509307, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46437, sessionid=0x1018e3b796b0002 2023-07-22 18:11:50,069 DEBUG [RS:0;jenkins-hbase4:38645] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:50,071 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:50,071 DEBUG [RS:1;jenkins-hbase4:46437] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:50,071 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46437,1690049509307' 2023-07-22 18:11:50,071 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:50,071 DEBUG [RS:0;jenkins-hbase4:38645] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38645,1690049509158' 2023-07-22 18:11:50,071 DEBUG [RS:0;jenkins-hbase4:38645] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:50,071 DEBUG [RS:0;jenkins-hbase4:38645] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:50,071 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:50,072 DEBUG [RS:0;jenkins-hbase4:38645] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:50,072 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:50,072 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:50,072 DEBUG [RS:1;jenkins-hbase4:46437] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:50,072 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46437,1690049509307' 2023-07-22 18:11:50,072 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:50,072 INFO [RS:0;jenkins-hbase4:38645] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:50,072 INFO [RS:0;jenkins-hbase4:38645] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 18:11:50,072 DEBUG [RS:1;jenkins-hbase4:46437] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:50,073 DEBUG [RS:1;jenkins-hbase4:46437] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:50,073 INFO [RS:1;jenkins-hbase4:46437] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:50,073 INFO [RS:1;jenkins-hbase4:46437] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-22 18:11:50,090 INFO [RS:2;jenkins-hbase4:38757] regionserver.Replication(203): jenkins-hbase4.apache.org,38757,1690049509462 started 2023-07-22 18:11:50,091 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,38757,1690049509462, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:38757, sessionid=0x1018e3b796b0003 2023-07-22 18:11:50,091 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:50,091 DEBUG [RS:2;jenkins-hbase4:38757] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:50,091 DEBUG [RS:2;jenkins-hbase4:38757] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38757,1690049509462' 2023-07-22 18:11:50,091 DEBUG [RS:2;jenkins-hbase4:38757] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:50,091 DEBUG [RS:2;jenkins-hbase4:38757] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:50,092 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:50,092 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:50,092 DEBUG [RS:2;jenkins-hbase4:38757] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:50,092 DEBUG [RS:2;jenkins-hbase4:38757] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,38757,1690049509462' 2023-07-22 18:11:50,092 DEBUG [RS:2;jenkins-hbase4:38757] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:50,092 DEBUG [RS:2;jenkins-hbase4:38757] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:50,093 DEBUG [RS:2;jenkins-hbase4:38757] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:50,093 INFO [RS:2;jenkins-hbase4:38757] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:50,093 INFO [RS:2;jenkins-hbase4:38757] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 18:11:50,175 INFO [RS:0;jenkins-hbase4:38645] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38645%2C1690049509158, suffix=, logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38645,1690049509158, archiveDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs, maxLogs=32 2023-07-22 18:11:50,176 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46437%2C1690049509307, suffix=, logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,46437,1690049509307, archiveDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs, maxLogs=32 2023-07-22 18:11:50,196 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK] 2023-07-22 18:11:50,196 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK] 2023-07-22 18:11:50,196 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK] 2023-07-22 18:11:50,197 INFO [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38757%2C1690049509462, suffix=, logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38757,1690049509462, archiveDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs, maxLogs=32 2023-07-22 18:11:50,212 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK] 2023-07-22 18:11:50,213 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK] 2023-07-22 18:11:50,213 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK] 2023-07-22 18:11:50,214 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,46437,1690049509307/jenkins-hbase4.apache.org%2C46437%2C1690049509307.1690049510176 2023-07-22 18:11:50,216 DEBUG [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK], DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK], DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK]] 2023-07-22 18:11:50,217 INFO [RS:0;jenkins-hbase4:38645] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38645,1690049509158/jenkins-hbase4.apache.org%2C38645%2C1690049509158.1690049510176 2023-07-22 18:11:50,218 DEBUG [RS:0;jenkins-hbase4:38645] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK], DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK], DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK]] 2023-07-22 18:11:50,219 DEBUG [jenkins-hbase4:44283] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-22 18:11:50,219 DEBUG [jenkins-hbase4:44283] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:50,219 DEBUG [jenkins-hbase4:44283] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:50,219 DEBUG [jenkins-hbase4:44283] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:50,219 DEBUG [jenkins-hbase4:44283] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:50,219 DEBUG [jenkins-hbase4:44283] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:50,224 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38757,1690049509462, state=OPENING 2023-07-22 18:11:50,226 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK] 2023-07-22 18:11:50,226 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK] 2023-07-22 18:11:50,226 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK] 2023-07-22 18:11:50,227 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-22 18:11:50,229 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:50,230 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38757,1690049509462}] 2023-07-22 18:11:50,230 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path 
/hbase/meta-region-server: CHANGED 2023-07-22 18:11:50,236 INFO [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38757,1690049509462/jenkins-hbase4.apache.org%2C38757%2C1690049509462.1690049510198 2023-07-22 18:11:50,239 DEBUG [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK], DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK], DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK]] 2023-07-22 18:11:50,376 WARN [ReadOnlyZKClient-127.0.0.1:64378@0x51aab894] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-22 18:11:50,377 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:50,378 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50758, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:50,378 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38757] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:50758 deadline: 1690049570378, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:50,391 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:50,393 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:50,395 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50774, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:50,399 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-22 18:11:50,399 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:50,400 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C38757%2C1690049509462.meta, suffix=.meta, logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38757,1690049509462, archiveDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs, maxLogs=32 2023-07-22 18:11:50,414 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK] 2023-07-22 18:11:50,414 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK] 2023-07-22 18:11:50,415 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK] 2023-07-22 18:11:50,417 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,38757,1690049509462/jenkins-hbase4.apache.org%2C38757%2C1690049509462.meta.1690049510401.meta 2023-07-22 18:11:50,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK], DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK], DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK]] 2023-07-22 18:11:50,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:50,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:50,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-22 18:11:50,417 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-22 18:11:50,417 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-22 18:11:50,418 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:50,418 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-22 18:11:50,418 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-22 18:11:50,422 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-22 18:11:50,423 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/info 2023-07-22 18:11:50,423 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/info 2023-07-22 18:11:50,423 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-22 18:11:50,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:50,424 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-22 18:11:50,424 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:50,424 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/rep_barrier 2023-07-22 18:11:50,425 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-22 18:11:50,425 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:50,425 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-22 18:11:50,426 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/table 2023-07-22 18:11:50,426 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/table 2023-07-22 18:11:50,426 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-22 18:11:50,427 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:50,427 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740 2023-07-22 18:11:50,428 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740 2023-07-22 18:11:50,430 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-22 18:11:50,431 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-22 18:11:50,432 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10208201440, jitterRate=-0.04928715527057648}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-22 18:11:50,432 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-22 18:11:50,433 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1690049510391 2023-07-22 18:11:50,437 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-22 18:11:50,437 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-22 18:11:50,438 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,38757,1690049509462, state=OPEN 2023-07-22 18:11:50,439 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-22 18:11:50,439 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-22 18:11:50,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-22 18:11:50,442 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,38757,1690049509462 in 209 msec 2023-07-22 18:11:50,443 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-22 18:11:50,443 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 400 msec 2023-07-22 18:11:50,445 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 681 msec 2023-07-22 18:11:50,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1690049510445, completionTime=-1 2023-07-22 18:11:50,445 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-22 18:11:50,445 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
2023-07-22 18:11:50,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-22 18:11:50,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1690049570449 2023-07-22 18:11:50,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1690049630449 2023-07-22 18:11:50,449 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 4 msec 2023-07-22 18:11:50,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44283,1690049508986-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44283,1690049508986-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44283,1690049508986-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44283, period=300000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,455 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:50,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
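The ChoreService lines above register the master's periodic chores (ClusterStatusChore, BalancerChore, RegionNormalizerChore, CatalogJanitor, HbckChore) with their periods in milliseconds. A minimal sketch of the ScheduledChore/ChoreService pattern those names refer to; the chore defined here is hypothetical, not one of the master's:

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
        public static void main(String[] args) throws Exception {
            Stoppable stopper = new Stoppable() {      // trivial stopper, enough for the sketch
                private volatile boolean stopped;
                @Override public void stop(String why) { stopped = true; }
                @Override public boolean isStopped() { return stopped; }
            };
            // Hypothetical chore; the BalancerChore above runs on the same 300000 ms period.
            ScheduledChore chore = new ScheduledChore("ExampleChore", stopper, 300000) {
                @Override protected void chore() {
                    System.out.println("periodic work runs here");
                }
            };
            ChoreService service = new ChoreService("example");
            service.scheduleChore(chore);
            Thread.sleep(1000);   // give the chore pool a moment, then stop it
            service.shutdown();
        }
    }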
2023-07-22 18:11:50,456 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:50,457 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-22 18:11:50,457 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-22 18:11:50,458 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:50,459 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:50,460 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,461 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6 empty. 2023-07-22 18:11:50,461 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,461 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-22 18:11:50,476 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:50,477 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 9f5da76f167889ae53c9bd8b306448b6, NAME => 'hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp 2023-07-22 18:11:50,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:50,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 9f5da76f167889ae53c9bd8b306448b6, disabling compactions & flushes 2023-07-22 18:11:50,492 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 
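The create message above prints the full 'hbase:namespace' descriptor that CreateTableProcedure then materializes. A minimal sketch of building an equivalent descriptor from a client with the public Admin API; the table name is hypothetical and only a subset of the printed attributes is set:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateTableSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("example_info")) // hypothetical name
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
                        .setBloomFilterType(BloomType.ROW)           // BLOOMFILTER => 'ROW'
                        .setInMemory(true)                           // IN_MEMORY => 'true'
                        .setMaxVersions(10)                          // VERSIONS => '10'
                        .setKeepDeletedCells(KeepDeletedCells.FALSE) // KEEP_DELETED_CELLS => 'FALSE'
                        .setBlocksize(8192)                          // BLOCKSIZE => '8192'
                        .build())
                    .build();
                admin.createTable(td);
            }
        }
    }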
2023-07-22 18:11:50,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:50,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. after waiting 0 ms 2023-07-22 18:11:50,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:50,492 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:50,492 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 9f5da76f167889ae53c9bd8b306448b6: 2023-07-22 18:11:50,494 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:50,495 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049510495"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049510495"}]},"ts":"1690049510495"} 2023-07-22 18:11:50,497 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:50,498 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:50,498 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049510498"}]},"ts":"1690049510498"} 2023-07-22 18:11:50,499 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-22 18:11:50,503 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:50,503 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:50,503 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:50,503 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:50,503 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:50,503 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9f5da76f167889ae53c9bd8b306448b6, ASSIGN}] 2023-07-22 18:11:50,505 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=9f5da76f167889ae53c9bd8b306448b6, ASSIGN 2023-07-22 18:11:50,505 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=9f5da76f167889ae53c9bd8b306448b6, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1690049509307; forceNewPlan=false, retain=false 2023-07-22 18:11:50,521 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-22 18:11:50,656 INFO [jenkins-hbase4:44283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:50,657 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9f5da76f167889ae53c9bd8b306448b6, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:50,657 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049510657"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049510657"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049510657"}]},"ts":"1690049510657"} 2023-07-22 18:11:50,659 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=5, state=RUNNABLE; OpenRegionProcedure 9f5da76f167889ae53c9bd8b306448b6, server=jenkins-hbase4.apache.org,46437,1690049509307}] 2023-07-22 18:11:50,812 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:50,812 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-22 18:11:50,814 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52020, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-22 18:11:50,818 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 
2023-07-22 18:11:50,818 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9f5da76f167889ae53c9bd8b306448b6, NAME => 'hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:50,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:50,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,819 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,820 INFO [StoreOpener-9f5da76f167889ae53c9bd8b306448b6-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,822 DEBUG [StoreOpener-9f5da76f167889ae53c9bd8b306448b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/info 2023-07-22 18:11:50,822 DEBUG [StoreOpener-9f5da76f167889ae53c9bd8b306448b6-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/info 2023-07-22 18:11:50,822 INFO [StoreOpener-9f5da76f167889ae53c9bd8b306448b6-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9f5da76f167889ae53c9bd8b306448b6 columnFamilyName info 2023-07-22 18:11:50,823 INFO [StoreOpener-9f5da76f167889ae53c9bd8b306448b6-1] regionserver.HStore(310): Store=9f5da76f167889ae53c9bd8b306448b6/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:50,824 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,824 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,827 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:50,829 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:50,830 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9f5da76f167889ae53c9bd8b306448b6; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10682131840, jitterRate=-0.005148947238922119}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:50,830 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9f5da76f167889ae53c9bd8b306448b6: 2023-07-22 18:11:50,831 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6., pid=6, masterSystemTime=1690049510812 2023-07-22 18:11:50,834 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:50,835 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 
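The "Opened 9f5da76f..." entry above reports the effective SteppingSplitPolicy (backed by ConstantSizeRegionSplitPolicy's desiredMaxFileSize plus jitter) and the flush policy for the region. A minimal sketch of pinning those per table on a descriptor, assuming the TableDescriptorBuilder setters named below; the table name and sizes are illustrative:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SplitPolicySketch {
        public static void main(String[] args) {
            TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("example_split")) // hypothetical
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("info")))
                // Same policy class the log reports for this region.
                .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
                .setMaxFileSize(10L * 1024 * 1024 * 1024)   // feeds desiredMaxFileSize (before jitter)
                .setMemStoreFlushSize(128L * 1024 * 1024)   // the flush policy derives its bounds from this
                .build();
            System.out.println(td);
        }
    }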
2023-07-22 18:11:50,835 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=9f5da76f167889ae53c9bd8b306448b6, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:50,835 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1690049510835"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049510835"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049510835"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049510835"}]},"ts":"1690049510835"} 2023-07-22 18:11:50,837 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=5 2023-07-22 18:11:50,838 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=5, state=SUCCESS; OpenRegionProcedure 9f5da76f167889ae53c9bd8b306448b6, server=jenkins-hbase4.apache.org,46437,1690049509307 in 177 msec 2023-07-22 18:11:50,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=5, resume processing ppid=4 2023-07-22 18:11:50,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=9f5da76f167889ae53c9bd8b306448b6, ASSIGN in 335 msec 2023-07-22 18:11:50,840 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:50,840 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049510840"}]},"ts":"1690049510840"} 2023-07-22 18:11:50,841 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-22 18:11:50,843 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:50,844 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 387 msec 2023-07-22 18:11:50,857 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-22 18:11:50,859 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:50,860 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:50,862 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:50,863 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52030, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins 
(auth:SIMPLE), service=ClientService 2023-07-22 18:11:50,865 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=7, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-22 18:11:50,872 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:50,874 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=7, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 8 msec 2023-07-22 18:11:50,877 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=8, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-22 18:11:50,881 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:50,882 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:50,882 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] procedure2.ProcedureExecutor(1029): Stored pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-22 18:11:50,888 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=8, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 11 msec 2023-07-22 18:11:50,889 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:50,889 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:50,891 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:50,891 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d empty. 
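Above, CreateNamespaceProcedure creates the built-in 'default' and 'hbase' namespaces. A minimal sketch of the equivalent client calls for a user namespace; the namespace name is hypothetical:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                admin.createNamespace(NamespaceDescriptor.create("example_ns").build()); // hypothetical namespace
                for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
                    System.out.println(ns.getName()); // expect default, hbase, example_ns
                }
            }
        }
    }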
2023-07-22 18:11:50,891 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:50,891 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-22 18:11:50,900 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-22 18:11:50,904 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-22 18:11:50,904 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.287sec 2023-07-22 18:11:50,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-22 18:11:50,910 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-22 18:11:50,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-22 18:11:50,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44283,1690049508986-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-22 18:11:50,911 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44283,1690049508986-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
2023-07-22 18:11:50,911 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-22 18:11:50,915 DEBUG [Listener at localhost/32999] zookeeper.ReadOnlyZKClient(139): Connect 0x2ad0ad4c to 127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:50,916 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:50,920 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3266efdc48b1d617bfe5b06d6aa0ae7d, NAME => 'hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp 2023-07-22 18:11:50,921 DEBUG [Listener at localhost/32999] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6f8c5c0b, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:50,922 DEBUG [hconnection-0x17e40f1d-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:50,929 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50790, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:50,930 INFO [Listener at localhost/32999] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:50,930 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:50,936 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:50,936 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 3266efdc48b1d617bfe5b06d6aa0ae7d, disabling compactions & flushes 2023-07-22 18:11:50,936 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:50,936 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:50,936 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 
after waiting 0 ms 2023-07-22 18:11:50,936 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:50,936 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:50,936 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 3266efdc48b1d617bfe5b06d6aa0ae7d: 2023-07-22 18:11:50,939 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:50,939 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049510939"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049510939"}]},"ts":"1690049510939"} 2023-07-22 18:11:50,941 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-22 18:11:50,941 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:50,942 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049510941"}]},"ts":"1690049510941"} 2023-07-22 18:11:50,942 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-22 18:11:50,945 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:50,945 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:50,945 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:50,945 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:50,945 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:50,945 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=3266efdc48b1d617bfe5b06d6aa0ae7d, ASSIGN}] 2023-07-22 18:11:50,946 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=3266efdc48b1d617bfe5b06d6aa0ae7d, ASSIGN 2023-07-22 18:11:50,947 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=10, ppid=9, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=3266efdc48b1d617bfe5b06d6aa0ae7d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,38757,1690049509462; forceNewPlan=false, retain=false 2023-07-22 18:11:51,097 INFO [jenkins-hbase4:44283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
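The Put {"totalColumns":...} entries above are the master's own mutations into hbase:meta (regioninfo, sn, state columns). The same Put shape is what a client builds against ordinary tables; a minimal sketch with a hypothetical table, row and values:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("example_table"))) { // hypothetical table
                Put put = new Put(Bytes.toBytes("row-1"))                           // row key
                    .addColumn(Bytes.toBytes("info"), Bytes.toBytes("state"), Bytes.toBytes("OPEN"))
                    .addColumn(Bytes.toBytes("info"), Bytes.toBytes("sn"), Bytes.toBytes("host,port,startcode"));
                table.put(put); // two columns in one row, analogous to the totalColumns=2 entries above
            }
        }
    }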
2023-07-22 18:11:51,099 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3266efdc48b1d617bfe5b06d6aa0ae7d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:51,099 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049511099"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049511099"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049511099"}]},"ts":"1690049511099"} 2023-07-22 18:11:51,101 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=11, ppid=10, state=RUNNABLE; OpenRegionProcedure 3266efdc48b1d617bfe5b06d6aa0ae7d, server=jenkins-hbase4.apache.org,38757,1690049509462}] 2023-07-22 18:11:51,256 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3266efdc48b1d617bfe5b06d6aa0ae7d, NAME => 'hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. service=MultiRowMutationService 2023-07-22 18:11:51,256 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
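The region open above loads org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from the table descriptor (coprocessor$1, priority 536870911) together with the SPLIT_POLICY metadata. A minimal sketch of attaching the same coprocessor and metadata to a descriptor; the table name is hypothetical:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CoprocessorSketch {
        public static void main(String[] args) throws Exception {
            TableDescriptor td = TableDescriptorBuilder.newBuilder(TableName.valueOf("example_cp")) // hypothetical
                .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("m")))
                // Same endpoint class the hbase:rsgroup descriptor carries.
                .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
                // Same metadata key/value shown in the create message.
                .setValue("SPLIT_POLICY", "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
                .build();
            System.out.println(td);
        }
    }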
2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,256 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,258 INFO [StoreOpener-3266efdc48b1d617bfe5b06d6aa0ae7d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,259 DEBUG [StoreOpener-3266efdc48b1d617bfe5b06d6aa0ae7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/m 2023-07-22 18:11:51,259 DEBUG [StoreOpener-3266efdc48b1d617bfe5b06d6aa0ae7d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/m 2023-07-22 18:11:51,259 INFO [StoreOpener-3266efdc48b1d617bfe5b06d6aa0ae7d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3266efdc48b1d617bfe5b06d6aa0ae7d columnFamilyName m 2023-07-22 18:11:51,260 INFO [StoreOpener-3266efdc48b1d617bfe5b06d6aa0ae7d-1] regionserver.HStore(310): Store=3266efdc48b1d617bfe5b06d6aa0ae7d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:51,261 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,261 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,263 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1055): writing seq id for 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:51,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:51,266 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3266efdc48b1d617bfe5b06d6aa0ae7d; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@22b70ecf, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:51,266 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3266efdc48b1d617bfe5b06d6aa0ae7d: 2023-07-22 18:11:51,271 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d., pid=11, masterSystemTime=1690049511252 2023-07-22 18:11:51,273 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=10 updating hbase:meta row=3266efdc48b1d617bfe5b06d6aa0ae7d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:51,273 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1690049511272"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049511272"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049511272"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049511272"}]},"ts":"1690049511272"} 2023-07-22 18:11:51,274 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:51,274 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 
2023-07-22 18:11:51,275 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=11, resume processing ppid=10 2023-07-22 18:11:51,275 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, ppid=10, state=SUCCESS; OpenRegionProcedure 3266efdc48b1d617bfe5b06d6aa0ae7d, server=jenkins-hbase4.apache.org,38757,1690049509462 in 173 msec 2023-07-22 18:11:51,277 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=10, resume processing ppid=9 2023-07-22 18:11:51,277 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, ppid=9, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=3266efdc48b1d617bfe5b06d6aa0ae7d, ASSIGN in 330 msec 2023-07-22 18:11:51,277 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:51,278 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049511278"}]},"ts":"1690049511278"} 2023-07-22 18:11:51,279 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-22 18:11:51,282 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=9, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:51,283 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=9, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 401 msec 2023-07-22 18:11:51,386 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-22 18:11:51,386 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 
2023-07-22 18:11:51,390 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:51,390 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:51,391 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:51,392 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-22 18:11:51,434 DEBUG [Listener at localhost/32999] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-22 18:11:51,436 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33458, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-22 18:11:51,439 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-22 18:11:51,439 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:51,440 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-22 18:11:51,441 DEBUG [Listener at localhost/32999] zookeeper.ReadOnlyZKClient(139): Connect 0x68e7269f to 127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:51,445 DEBUG [Listener at localhost/32999] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5fcc4079, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:51,446 INFO [Listener at localhost/32999] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:64378 2023-07-22 18:11:51,450 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:51,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1018e3b796b000a connected 2023-07-22 18:11:51,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:51,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 
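Above, the test turns the balancer off (balanceSwitch=false) and issues ListRSGroupInfos through the RSGroupAdminService endpoint. A minimal client-side sketch: Admin.balancerSwitch is public API, while RSGroupAdminClient is the helper class this hbase-rsgroup module ships, so treat its exact surface as an assumption:

    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class RSGroupListSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                admin.balancerSwitch(false, true);                      // same balanceSwitch=false call the log records
                RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
                List<RSGroupInfo> groups = rsGroupAdmin.listRSGroups(); // ListRSGroupInfos, as above
                for (RSGroupInfo g : groups) {
                    System.out.println(g.getName() + " servers=" + g.getServers());
                }
            }
        }
    }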
2023-07-22 18:11:51,458 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-22 18:11:51,470 INFO [Listener at localhost/32999] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-22 18:11:51,471 INFO [Listener at localhost/32999] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33817 2023-07-22 18:11:51,471 INFO [Listener at localhost/32999] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-22 18:11:51,474 DEBUG [Listener at localhost/32999] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-22 18:11:51,474 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:51,475 INFO [Listener at localhost/32999] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-22 18:11:51,476 INFO [Listener at localhost/32999] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33817 connecting to ZooKeeper ensemble=127.0.0.1:64378 2023-07-22 18:11:51,483 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:338170x0, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-22 18:11:51,485 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33817-0x1018e3b796b000b connected 2023-07-22 18:11:51,485 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-22 18:11:51,485 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-22 18:11:51,486 DEBUG [Listener at localhost/32999] zookeeper.ZKUtil(164): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-22 18:11:51,486 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33817 2023-07-22 18:11:51,486 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33817 2023-07-22 18:11:51,490 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33817 2023-07-22 18:11:51,490 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33817 2023-07-22 18:11:51,490 DEBUG [Listener at localhost/32999] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33817 2023-07-22 18:11:51,492 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-22 18:11:51,492 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-22 18:11:51,492 INFO [Listener at localhost/32999] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-22 18:11:51,493 INFO [Listener at localhost/32999] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-22 18:11:51,493 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-22 18:11:51,493 INFO [Listener at localhost/32999] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-22 18:11:51,493 INFO [Listener at localhost/32999] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
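The RpcExecutor lines above (handlerCount=3, maxQueueLength=30 per queue) and the Jetty info server that follows are driven by ordinary server configuration. A minimal sketch of the usual knobs; hbase.ipc.server.max.callqueue.length is my assumption for the queue-length key, the other two are standard properties:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class RpcServerConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            conf.setInt("hbase.regionserver.handler.count", 3);        // default-queue handlers, cf. handlerCount=3 above
            conf.setInt("hbase.ipc.server.max.callqueue.length", 30);  // assumption: key behind maxQueueLength=30
            conf.setInt("hbase.regionserver.info.port", 0);            // 0 lets the Jetty info server pick a free port
            System.out.println(conf.getInt("hbase.regionserver.handler.count", -1));
        }
    }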
2023-07-22 18:11:51,494 INFO [Listener at localhost/32999] http.HttpServer(1146): Jetty bound to port 34307 2023-07-22 18:11:51,494 INFO [Listener at localhost/32999] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-22 18:11:51,495 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:51,495 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@74695aa5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,AVAILABLE} 2023-07-22 18:11:51,495 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:51,496 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@10a95ce4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-22 18:11:51,608 INFO [Listener at localhost/32999] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-22 18:11:51,609 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-22 18:11:51,609 INFO [Listener at localhost/32999] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-22 18:11:51,609 INFO [Listener at localhost/32999] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-22 18:11:51,610 INFO [Listener at localhost/32999] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-22 18:11:51,610 INFO [Listener at localhost/32999] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@66a7c817{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/java.io.tmpdir/jetty-0_0_0_0-34307-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4994163187763522317/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:51,612 INFO [Listener at localhost/32999] server.AbstractConnector(333): Started ServerConnector@43995fad{HTTP/1.1, (http/1.1)}{0.0.0.0:34307} 2023-07-22 18:11:51,612 INFO [Listener at localhost/32999] server.Server(415): Started @45565ms 2023-07-22 18:11:51,614 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(951): ClusterId : 21deeca3-acd1-4ccc-917a-1f9039f825b8 2023-07-22 18:11:51,614 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-22 18:11:51,616 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-22 18:11:51,616 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-22 18:11:51,618 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-22 18:11:51,619 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ReadOnlyZKClient(139): Connect 0x3efa07c0 to 
127.0.0.1:64378 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-22 18:11:51,624 DEBUG [RS:3;jenkins-hbase4:33817] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@303dfe6f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-22 18:11:51,624 DEBUG [RS:3;jenkins-hbase4:33817] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@34026cb0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:51,632 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:33817 2023-07-22 18:11:51,632 INFO [RS:3;jenkins-hbase4:33817] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-22 18:11:51,632 INFO [RS:3;jenkins-hbase4:33817] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-22 18:11:51,632 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1022): About to register with Master. 2023-07-22 18:11:51,633 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44283,1690049508986 with isa=jenkins-hbase4.apache.org/172.31.14.131:33817, startcode=1690049511469 2023-07-22 18:11:51,633 DEBUG [RS:3;jenkins-hbase4:33817] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-22 18:11:51,635 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:54853, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-22 18:11:51,635 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44283] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,635 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
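The reportForDuty / "Registering regionserver" / "Updating default servers" exchange above is what follows when the test brings a fourth region server (RS:3) back into the cluster. A minimal sketch of doing that from a test with HBaseTestingUtility; variable names are hypothetical:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;

    public class AddRegionServerSketch {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            util.startMiniCluster(3);                    // three region servers to start with
            MiniHBaseCluster cluster = util.getMiniHBaseCluster();
            cluster.startRegionServer();                 // brings up a fourth RS; it registers with the master as above
            System.out.println("live RS = " + cluster.getLiveRegionServerThreads().size());
            util.shutdownMiniCluster();
        }
    }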
2023-07-22 18:11:51,636 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e 2023-07-22 18:11:51,636 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:35975 2023-07-22 18:11:51,636 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=44621 2023-07-22 18:11:51,644 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:51,644 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:51,644 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:51,644 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:51,644 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:51,644 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,644 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-22 18:11:51,644 WARN [RS:3;jenkins-hbase4:33817] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-22 18:11:51,644 INFO [RS:3;jenkins-hbase4:33817] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-22 18:11:51,645 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33817,1690049511469] 2023-07-22 18:11:51,645 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1948): logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:51,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:51,645 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:51,646 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:51,646 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:51,647 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:51,646 
DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:51,653 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:51,654 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,654 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:51,654 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ZKUtil(162): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:51,655 DEBUG [RS:3;jenkins-hbase4:33817] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-22 18:11:51,655 INFO [RS:3;jenkins-hbase4:33817] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-22 18:11:51,657 INFO [RS:3;jenkins-hbase4:33817] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-22 18:11:51,657 INFO [RS:3;jenkins-hbase4:33817] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-22 18:11:51,657 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:51,658 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-22 18:11:51,659 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,659 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,660 DEBUG [RS:3;jenkins-hbase4:33817] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-22 18:11:51,660 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:51,660 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:51,661 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-22 18:11:51,671 INFO [RS:3;jenkins-hbase4:33817] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-22 18:11:51,671 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33817,1690049511469-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
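Side note on the chore registrations above (CompactionChecker, MemstoreFlusherChore, nonceCleaner, HeapMemoryTunerChore): they all go through the ChoreService / ScheduledChore pair that these entries name. The following is a minimal, illustrative sketch only, assuming the org.apache.hadoop.hbase.ChoreService, ScheduledChore and Stoppable types referenced in the log keep the constructor and method shapes used here; the chore name, period and body are invented for the example and are not HBase's actual CompactionChecker.

    import org.apache.hadoop.hbase.ChoreService;
    import org.apache.hadoop.hbase.ScheduledChore;
    import org.apache.hadoop.hbase.Stoppable;

    public class ChoreSketch {
      public static void main(String[] args) throws InterruptedException {
        // Minimal Stoppable so the chore can be cancelled; a real region server passes itself here.
        Stoppable stopper = new Stoppable() {
          private volatile boolean stopped;
          @Override public void stop(String why) { stopped = true; }
          @Override public boolean isStopped() { return stopped; }
        };

        // Thread-pool prefix is arbitrary for this sketch; the server uses its own name.
        ChoreService choreService = new ChoreService("sketch");

        // Hypothetical chore; the 1000 ms period mirrors the "period=1000, unit=MILLISECONDS"
        // entries above, but the body is ours.
        ScheduledChore demoChore = new ScheduledChore("demoChore", stopper, 1000) {
          @Override
          protected void chore() {
            System.out.println("chore tick");
          }
        };

        // Scheduling is what corresponds to the "Chore ScheduledChore name=... is enabled." entries.
        choreService.scheduleChore(demoChore);
        Thread.sleep(3000);
        stopper.stop("done");
        choreService.shutdown();
      }
    }
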
2023-07-22 18:11:51,684 INFO [RS:3;jenkins-hbase4:33817] regionserver.Replication(203): jenkins-hbase4.apache.org,33817,1690049511469 started 2023-07-22 18:11:51,684 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33817,1690049511469, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33817, sessionid=0x1018e3b796b000b 2023-07-22 18:11:51,684 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-22 18:11:51,684 DEBUG [RS:3;jenkins-hbase4:33817] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,684 DEBUG [RS:3;jenkins-hbase4:33817] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33817,1690049511469' 2023-07-22 18:11:51,684 DEBUG [RS:3;jenkins-hbase4:33817] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-22 18:11:51,684 DEBUG [RS:3;jenkins-hbase4:33817] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-22 18:11:51,685 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:51,685 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-22 18:11:51,685 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-22 18:11:51,685 DEBUG [RS:3;jenkins-hbase4:33817] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:51,685 DEBUG [RS:3;jenkins-hbase4:33817] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33817,1690049511469' 2023-07-22 18:11:51,685 DEBUG [RS:3;jenkins-hbase4:33817] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-22 18:11:51,685 DEBUG [RS:3;jenkins-hbase4:33817] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-22 18:11:51,686 DEBUG [RS:3;jenkins-hbase4:33817] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-22 18:11:51,686 INFO [RS:3;jenkins-hbase4:33817] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-22 18:11:51,686 INFO [RS:3;jenkins-hbase4:33817] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-22 18:11:51,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:51,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:51,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:51,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:51,691 DEBUG [hconnection-0xc6b1b47-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-22 18:11:51,693 INFO [RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:50798, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-22 18:11:51,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:51,698 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:51,701 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:51,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:51,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33458 deadline: 1690050711701, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
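Side note on the rsgroup calls above ("add rsgroup master", then "move servers [jenkins-hbase4.apache.org:44283] to rsgroup master" rejected with a ConstraintException): the stack traces show these come from the test teardown path via RSGroupAdminClient.moveServers, and the client-side view of the same rejection follows in the WARN below. As a hedged reconstruction only, assuming the RSGroupAdminClient, Address and ConstraintException types named in the traces keep the signatures used here, the client side of that exchange looks roughly like the sketch below; the host/port and group name are copied from the log, everything else is illustrative and not the test's actual code.

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveServersSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

          // "add rsgroup master", as in the entry above.
          rsGroupAdmin.addRSGroup("master");

          // Attempt to move the master's address into that group. The master is not a
          // region server, so the server side rejects it with the ConstraintException
          // ("is either offline or it does not exist") captured above.
          Address masterAddr = Address.fromParts("jenkins-hbase4.apache.org", 44283);
          try {
            rsGroupAdmin.moveServers(Collections.singleton(masterAddr), "master");
          } catch (ConstraintException e) {
            // The test teardown logs this and carries on ("Got this on setup, FYI" below).
            System.out.println("move rejected: " + e.getMessage());
          }
        }
      }
    }
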
2023-07-22 18:11:51,701 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:51,703 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:51,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:51,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:51,703 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:51,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:51,704 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:51,753 INFO [Listener at localhost/32999] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=557 (was 513) Potentially hanging thread: qtp1419808726-2299 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2315 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1419808726-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38083-SendThread(127.0.0.1:56348) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:369) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1137) Potentially hanging thread: DataNode DiskChecker thread 1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server handler 3 on default port 34273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2072683135_17 at /127.0.0.1:43184 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 168248853@qtp-1880385469-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4a70d777-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4a70d777-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:51056 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@60f22bef[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at 
localhost/32999-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2140795073-2269 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x665b7fd1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x665b7fd1-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648876835-2242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 427963865@qtp-1712522677-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: hconnection-0x4a70d777-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:35975 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 382048936@qtp-1353687190-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39359 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: IPC Server handler 3 on default port 35975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: pool-547-thread-1 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:35975 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56348@0x456bd6f7-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: 297498938@qtp-1866574650-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33815 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@582701c7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0xc6b1b47-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 2 on default port 34273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-22643add-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: qtp1648876835-2237 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x68e7269f-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server handler 4 on default port 40541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x1caaede6-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:38645Replication Statistics 
#0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@22366487 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56348@0x456bd6f7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1766243457-2580 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x3efa07c0-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0xc6b1b47-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 32999 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@3fe22ac8 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2072683135_17 at /127.0.0.1:50364 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2311 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x665b7fd1-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:35975 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x68e7269f-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@32f19bdc sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2140795073-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 1 on default port 32999 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: CacheReplicationMonitor(1253741136) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: IPC Server handler 2 on default port 35975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@11a5b089 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648876835-2238-acceptor-0@763fdb7d-ServerConnector@25e91bdf{HTTP/1.1, (http/1.1)}{0.0.0.0:34037} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) 
Potentially hanging thread: Listener at localhost/32999-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x51aab894-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: hconnection-0x4a70d777-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x6da1db12-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1645402756_17 at /127.0.0.1:55028 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 35975 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-96619661_17 at /127.0.0.1:42554 [Waiting for operation #3] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:43261 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:56348@0x456bd6f7-SendThread(127.0.0.1:56348) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:228) org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1338) org.apache.zookeeper.ClientCnxn$SendThread.cleanAndNotifyState(ClientCnxn.java:1276) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1254) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:35975 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server idle connection scanner for port 35975 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1645402756_17 at /127.0.0.1:43146 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 40541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1350580402-2312-acceptor-0@14947463-ServerConnector@5e47ccd2{HTTP/1.1, (http/1.1)}{0.0.0.0:39739} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x6da1db12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ForkJoinPool-3-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS:3;jenkins-hbase4:33817-longCompactions-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@5e2487a8 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 2054510413@qtp-1712522677-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44285 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging 
thread: ReadOnlyZKClient-127.0.0.1:64378@0x1caaede6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:50378 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2314 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:64378@0x2ad0ad4c sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1766243457-2578 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4a70d777-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:35442 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 34273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: Listener at localhost/32999-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Listener at localhost/32999.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RS:0;jenkins-hbase4:38645-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1291144979-2206 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x1caaede6-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS:1;jenkins-hbase4:46437-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:44283 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 40541 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) 
java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1766243457-2574-acceptor-0@13a5f950-ServerConnector@43995fad{HTTP/1.1, (http/1.1)}{0.0.0.0:34307} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 0 on default port 40541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1645402756_17 at /127.0.0.1:50322 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:43261 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2140795073-2271 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e-prefix:jenkins-hbase4.apache.org,38757,1690049509462.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648876835-2240 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4a70d777-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2140795073-2267 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) 
org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4a70d777-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 3 on default port 32999 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: jenkins-hbase4:46437Replication Statistics #0 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:43261 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1291144979-2212 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData-prefix:jenkins-hbase4.apache.org,44283,1690049508986 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2972cbd7-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: region-location-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1081) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@641eba01[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:43261 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 
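Most of the stacks in this report follow the same idle-worker pattern: a pool or RPC handler thread parked in LockSupport.park/parkNanos while waiting on a blocking queue (LinkedBlockingQueue.take, StealJobQueue.take, DelayQueue.poll) or a Semaphore, which is what an idle thread looks like rather than a deadlocked one. As a minimal illustration (plain JDK only, not HBase code; the class and thread names below are made up for the sketch), the following reproduces that stack shape for an idle ThreadPoolExecutor worker:

import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: shows a worker parked inside LinkedBlockingQueue.take()
// via ThreadPoolExecutor.getTask(), the same frames the report prints for idle workers.
public class IdleWorkerStackDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(),
                r -> new Thread(r, "demo-idle-worker"));   // hypothetical thread name
        pool.prestartAllCoreThreads();   // start the core worker with no task queued
        Thread.sleep(200);               // give it time to park in the queue's take()

        // Print the idle worker's stack: LockSupport.park -> LinkedBlockingQueue.take
        // -> ThreadPoolExecutor.getTask -> runWorker -> Worker.run -> Thread.run
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            if (e.getKey().getName().equals("demo-idle-worker")) {
                System.out.println("Potentially hanging thread: " + e.getKey().getName());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    " + frame);
                }
            }
        }
        pool.shutdownNow();   // release the worker so the demo exits
    }
}

A thread showing only these frames is blocked waiting for work; whether it is genuinely leaked depends on whether its owning pool was supposed to have been shut down by the test teardown.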
Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x68e7269f sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@58468d52 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e-prefix:jenkins-hbase4.apache.org,38757,1690049509462 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:2;jenkins-hbase4:38757 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 32999 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-11-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44283,1690049508986 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x6da1db12-EventThread sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43261 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 34273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-3-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:54988 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x2ad0ad4c-SendThread(127.0.0.1:64378) 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:35975 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:35975 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1648876835-2239 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:35975 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1291144979-2210 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:43261 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:64378 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049509794 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: qtp1419808726-2301 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:33817Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:3;jenkins-hbase4:33817 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2140795073-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1291144979-2209 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-96619661_17 at /127.0.0.1:55058 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially 
hanging thread: Listener at localhost/32999-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1648876835-2244 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data1/current/BP-1266385466-172.31.14.131-1690049508270 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2309 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1419808726-2303 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@6dfc7fd7[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1419808726-2300 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 40541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1291144979-2208 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Server idle connection scanner for port 34273 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data4/current/BP-1266385466-172.31.14.131-1690049508270 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/32999-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-96619661_17 at /127.0.0.1:50350 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) 
java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2140795073-2268-acceptor-0@32c5a4cc-ServerConnector@46096509{HTTP/1.1, (http/1.1)}{0.0.0.0:37849} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) 
org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:35975 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1291144979-2207-acceptor-0@4fc5675e-ServerConnector@79271683{HTTP/1.1, (http/1.1)}{0.0.0.0:44621} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data3) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: Listener at localhost/32999.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:35975 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2140795073-2273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-96619661_17 at /127.0.0.1:43172 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:46437 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x51aab894 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x3efa07c0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/1294521150.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2140795073-2270 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e-prefix:jenkins-hbase4.apache.org,38645,1690049509158 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1419808726-2297 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@4f269b40 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@28b0b21b java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1788318837@qtp-1880385469-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35165 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:38757-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2308 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@164e4df3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2310 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1213190842@qtp-1353687190-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data3/current/BP-1266385466-172.31.14.131-1690049508270 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 32999 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1291144979-2211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1766243457-2577 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-43f53346-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data2/current/BP-1266385466-172.31.14.131-1690049508270 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1350580402-2313 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 4 on default port 35975 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:43261 java.lang.Thread.sleep(Native Method) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,33193,1690049503617 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1419808726-2298-acceptor-0@2f4ffe65-ServerConnector@12826d0e{HTTP/1.1, (http/1.1)}{0.0.0.0:35669} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
ReadOnlyZKClient-127.0.0.1:64378@0x51aab894-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: 1508434648@qtp-1866574650-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/32999.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: IPC Server handler 1 on default port 40541 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:43261 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:43261 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data6/current/BP-1266385466-172.31.14.131-1690049508270 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1419808726-2302 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:38645 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@166c7323 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x3efa07c0-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ProcessThread(sid:0 cport:64378): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=38645 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049509799 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1766243457-2579 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1766243457-2575 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=38757 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e-prefix:jenkins-hbase4.apache.org,46437,1690049509307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x4a70d777-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:64378@0x2ad0ad4c-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1291144979-2213 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44283 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:38757Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@1c50d6e9 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:55082 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/32999-SendThread(127.0.0.1:64378) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1648876835-2243 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2072683135_17 at /127.0.0.1:55074 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-1266385466-172.31.14.131-1690049508270:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1648876835-2241 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 0 on default port 32999 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (638480775) connection to localhost/127.0.0.1:35975 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1766243457-2576 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=38757 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1766243457-2573 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/1343617243.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/38083-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:42556 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Session-HouseKeeper-4874feda-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@69823956 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 34273 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_322105248_17 at /127.0.0.1:43190 [Receiving block BP-1266385466-172.31.14.131-1690049508270:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) 
org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-12ac4b0f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x17e40f1d-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data5/current/BP-1266385466-172.31.14.131-1690049508270 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) 
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46437 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) - Thread LEAK? -, OpenFileDescriptor=813 (was 787) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 420), ProcessCount=172 (was 172), AvailableMemoryMB=8216 (was 8358) 2023-07-22 18:11:51,756 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=557 is superior to 500 2023-07-22 18:11:51,773 INFO [Listener at localhost/32999] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=555, OpenFileDescriptor=813, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=172, AvailableMemoryMB=8215 2023-07-22 18:11:51,773 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=555 is superior to 500 2023-07-22 18:11:51,774 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-22 18:11:51,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:51,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:51,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:51,778 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
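The RSGroupAdminService traffic logged just above and in the entries that follow (ListRSGroupInfos, MoveTables and MoveServers with empty sets, RemoveRSGroup/AddRSGroup for "master", and the MoveServers call that is rejected with a ConstraintException because the master's address is not a live region server) can be sketched from the client side roughly as below. This is a hypothetical illustration, not the actual TestRSGroupsBase/TestRSGroupsAdmin1 code; it assumes the branch-2.4 hbase-rsgroup RSGroupAdminClient API and uses a placeholder ZooKeeper quorum.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupResetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "localhost"); // placeholder, not the test cluster
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // ListRSGroupInfos: enumerate the current groups (default, master, ...).
      rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));

      // MoveTables/MoveServers with empty sets; the master logs and ignores these.
      rsGroupAdmin.moveTables(Collections.emptySet(), "default");
      rsGroupAdmin.moveServers(Collections.emptySet(), "default");

      // RemoveRSGroup then AddRSGroup: recreate an empty "master" group.
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");

      // MoveServers with the master's own address is rejected because only
      // registered region servers can be moved; this mirrors the
      // "Server ... is either offline or it does not exist" exception below.
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 44283)),
            "master");
      } catch (ConstraintException expected) {
        System.out.println("expected: " + expected.getMessage());
      }
    }
  }
}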
2023-07-22 18:11:51,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:51,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:51,779 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:51,780 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:51,782 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:51,783 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:51,785 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:51,787 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:51,788 INFO [RS:3;jenkins-hbase4:33817] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33817%2C1690049511469, suffix=, logDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,33817,1690049511469, archiveDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs, maxLogs=32 2023-07-22 18:11:51,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:51,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:51,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:51,792 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:51,793 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:51,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:51,795 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:51,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move 
servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:51,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:51,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33458 deadline: 1690050711797, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:51,798 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:51,799 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:51,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:51,800 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:51,800 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:51,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:51,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:51,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:51,804 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-22 18:11:51,805 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:51,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-22 18:11:51,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 18:11:51,813 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:51,814 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK] 2023-07-22 18:11:51,814 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK] 2023-07-22 18:11:51,814 DEBUG [RS-EventLoopGroup-16-2] 
asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK] 2023-07-22 18:11:51,814 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:51,815 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:51,816 INFO [RS:3;jenkins-hbase4:33817] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/WALs/jenkins-hbase4.apache.org,33817,1690049511469/jenkins-hbase4.apache.org%2C33817%2C1690049511469.1690049511788 2023-07-22 18:11:51,816 DEBUG [RS:3;jenkins-hbase4:33817] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:33395,DS-333fb974-6b27-4140-a367-0fecceaaf284,DISK], DatanodeInfoWithStorage[127.0.0.1:38427,DS-4070490d-8f75-4434-8c16-7e4ba6b38225,DISK], DatanodeInfoWithStorage[127.0.0.1:43689,DS-43aa2c87-fda9-43ec-ad54-5f65f9bef8cb,DISK]] 2023-07-22 18:11:51,817 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-22 18:11:51,818 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:51,819 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 empty. 
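The create 't1' request above (one region replica, a single family 'cf1' keeping one version, all other attributes at their defaults) corresponds to a plain Admin.createTable call. A minimal sketch using the standard HBase 2.x client API, with the same placeholder connection assumption as the previous snippet and not taken from the test itself:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateT1Sketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "localhost"); // placeholder
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Mirrors the descriptor in the log: REGION_REPLICATION => '1',
      // family 'cf1' with VERSIONS => '1', defaults everywhere else.
      TableDescriptor t1 = TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
              .setMaxVersions(1)
              .build())
          .build();
      // Issues the CreateTable master RPC that drives a CreateTableProcedure
      // like pid=12 above, then blocks until the procedure completes.
      admin.createTable(t1);
    }
  }
}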
2023-07-22 18:11:51,819 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:51,819 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-22 18:11:51,836 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-22 18:11:51,837 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => cbbeda9c18f1c46ddd38d458eae38699, NAME => 't1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp 2023-07-22 18:11:51,845 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:51,846 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing cbbeda9c18f1c46ddd38d458eae38699, disabling compactions & flushes 2023-07-22 18:11:51,846 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:51,846 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:51,846 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. after waiting 0 ms 2023-07-22 18:11:51,846 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:51,846 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:51,846 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for cbbeda9c18f1c46ddd38d458eae38699: 2023-07-22 18:11:51,848 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-22 18:11:51,849 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049511849"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049511849"}]},"ts":"1690049511849"} 2023-07-22 18:11:51,850 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
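Writing the regioninfo/state rows into hbase:meta (above) still leaves the region to be assigned by the TransitRegionStateProcedure that follows; a caller that needs 't1' usable would typically poll for availability, much like the 60-second Waiter seen earlier. A rough sketch against the public Admin API, assuming an Admin handle such as the one opened in the previous snippet:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.RegionInfo;

public class WaitForT1Sketch {
  // Polls until 't1' is reported available or a 60s deadline passes.
  static void waitForT1(Admin admin) throws Exception {
    TableName t1 = TableName.valueOf("t1");
    long deadline = System.currentTimeMillis() + 60_000L;
    while (!admin.isTableAvailable(t1)) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("t1 not available within 60s");
      }
      Thread.sleep(100);
    }
    // Once assignment finishes, the single region created above is visible.
    for (RegionInfo region : admin.getRegions(t1)) {
      System.out.println(region.getRegionNameAsString());
    }
  }
}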
2023-07-22 18:11:51,851 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-22 18:11:51,851 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049511851"}]},"ts":"1690049511851"} 2023-07-22 18:11:51,852 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-22 18:11:51,856 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-22 18:11:51,856 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-22 18:11:51,856 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-22 18:11:51,856 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-22 18:11:51,856 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-22 18:11:51,856 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-22 18:11:51,856 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, ASSIGN}] 2023-07-22 18:11:51,857 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, ASSIGN 2023-07-22 18:11:51,862 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,46437,1690049509307; forceNewPlan=false, retain=false 2023-07-22 18:11:51,908 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 18:11:51,958 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 18:11:51,959 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-22 18:11:51,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:51,959 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-22 18:11:51,959 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 18:11:51,959 INFO [HBase-Metrics2-1] 
impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-22 18:11:52,012 INFO [jenkins-hbase4:44283] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-22 18:11:52,013 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=cbbeda9c18f1c46ddd38d458eae38699, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:52,014 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049512013"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049512013"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049512013"}]},"ts":"1690049512013"} 2023-07-22 18:11:52,015 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure cbbeda9c18f1c46ddd38d458eae38699, server=jenkins-hbase4.apache.org,46437,1690049509307}] 2023-07-22 18:11:52,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 18:11:52,171 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:52,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cbbeda9c18f1c46ddd38d458eae38699, NAME => 't1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.', STARTKEY => '', ENDKEY => ''} 2023-07-22 18:11:52,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-22 18:11:52,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,172 INFO [StoreOpener-cbbeda9c18f1c46ddd38d458eae38699-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,174 DEBUG [StoreOpener-cbbeda9c18f1c46ddd38d458eae38699-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/cf1 2023-07-22 18:11:52,174 DEBUG [StoreOpener-cbbeda9c18f1c46ddd38d458eae38699-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/cf1 2023-07-22 
18:11:52,174 INFO [StoreOpener-cbbeda9c18f1c46ddd38d458eae38699-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cbbeda9c18f1c46ddd38d458eae38699 columnFamilyName cf1 2023-07-22 18:11:52,175 INFO [StoreOpener-cbbeda9c18f1c46ddd38d458eae38699-1] regionserver.HStore(310): Store=cbbeda9c18f1c46ddd38d458eae38699/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-22 18:11:52,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,176 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,178 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,180 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-22 18:11:52,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cbbeda9c18f1c46ddd38d458eae38699; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11383137120, jitterRate=0.06013725697994232}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-22 18:11:52,181 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cbbeda9c18f1c46ddd38d458eae38699: 2023-07-22 18:11:52,181 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699., pid=14, masterSystemTime=1690049512167 2023-07-22 18:11:52,183 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:52,183 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 
2023-07-22 18:11:52,183 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=cbbeda9c18f1c46ddd38d458eae38699, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:52,183 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049512183"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1690049512183"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1690049512183"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1690049512183"}]},"ts":"1690049512183"} 2023-07-22 18:11:52,186 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-22 18:11:52,186 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure cbbeda9c18f1c46ddd38d458eae38699, server=jenkins-hbase4.apache.org,46437,1690049509307 in 170 msec 2023-07-22 18:11:52,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-22 18:11:52,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, ASSIGN in 330 msec 2023-07-22 18:11:52,188 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-22 18:11:52,188 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049512188"}]},"ts":"1690049512188"} 2023-07-22 18:11:52,189 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-22 18:11:52,191 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-22 18:11:52,192 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 388 msec 2023-07-22 18:11:52,411 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-22 18:11:52,411 INFO [Listener at localhost/32999] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-22 18:11:52,411 DEBUG [Listener at localhost/32999] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-22 18:11:52,411 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:52,413 INFO [Listener at localhost/32999] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-22 18:11:52,413 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:52,414 INFO [Listener at localhost/32999] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
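The entries that follow show the test issuing the same create again (pid=15) and the master rejecting it with TableExistsException before rolling the procedure back. A hedged sketch of how a client can guard against that, reusing the Admin handle and TableDescriptor from the sketch above (helper name is illustrative):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableExistsException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;

    // Create the table only if it is not already there; tolerate a racing creator.
    static void createIfAbsent(Admin admin, TableDescriptor desc) throws IOException {
      TableName name = desc.getTableName();
      if (admin.tableExists(name)) {
        return; // skip the create instead of triggering a rollback like pid=15 below
      }
      try {
        admin.createTable(desc);
      } catch (TableExistsException e) {
        // Someone else created it between the check and the call; treat as success.
      }
    }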
2023-07-22 18:11:52,415 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-22 18:11:52,416 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-22 18:11:52,418 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-22 18:11:52,418 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-22 18:11:52,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:33458 deadline: 1690049572415, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-22 18:11:52,420 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:52,421 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=6 msec 2023-07-22 18:11:52,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:52,521 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:52,522 INFO [Listener at localhost/32999] client.HBaseAdmin$15(890): Started disable of t1 2023-07-22 18:11:52,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-22 18:11:52,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-22 18:11:52,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-22 18:11:52,526 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049512526"}]},"ts":"1690049512526"} 2023-07-22 18:11:52,527 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-22 18:11:52,528 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-22 18:11:52,529 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, UNASSIGN}] 2023-07-22 18:11:52,529 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, UNASSIGN 2023-07-22 18:11:52,530 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=cbbeda9c18f1c46ddd38d458eae38699, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:52,530 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049512530"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1690049512530"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1690049512530"}]},"ts":"1690049512530"} 2023-07-22 18:11:52,531 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure cbbeda9c18f1c46ddd38d458eae38699, server=jenkins-hbase4.apache.org,46437,1690049509307}] 2023-07-22 18:11:52,627 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-22 18:11:52,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cbbeda9c18f1c46ddd38d458eae38699, disabling compactions & flushes 2023-07-22 18:11:52,686 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:52,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:52,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. after waiting 0 ms 2023-07-22 18:11:52,686 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 
2023-07-22 18:11:52,690 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-22 18:11:52,691 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699. 2023-07-22 18:11:52,691 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cbbeda9c18f1c46ddd38d458eae38699: 2023-07-22 18:11:52,692 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,693 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=cbbeda9c18f1c46ddd38d458eae38699, regionState=CLOSED 2023-07-22 18:11:52,693 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1690049512693"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1690049512693"}]},"ts":"1690049512693"} 2023-07-22 18:11:52,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-22 18:11:52,695 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure cbbeda9c18f1c46ddd38d458eae38699, server=jenkins-hbase4.apache.org,46437,1690049509307 in 163 msec 2023-07-22 18:11:52,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-22 18:11:52,697 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=cbbeda9c18f1c46ddd38d458eae38699, UNASSIGN in 166 msec 2023-07-22 18:11:52,697 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1690049512697"}]},"ts":"1690049512697"} 2023-07-22 18:11:52,698 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-22 18:11:52,700 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-22 18:11:52,702 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 178 msec 2023-07-22 18:11:52,828 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-22 18:11:52,828 INFO [Listener at localhost/32999] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-22 18:11:52,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-22 18:11:52,830 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-22 18:11:52,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-22 18:11:52,832 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-22 18:11:52,834 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-22 18:11:52,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:52,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:52,836 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:52,838 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 18:11:52,840 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/cf1, FileablePath, hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/recovered.edits] 2023-07-22 18:11:52,847 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/recovered.edits/4.seqid to hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/archive/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699/recovered.edits/4.seqid 2023-07-22 18:11:52,848 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/.tmp/data/default/t1/cbbeda9c18f1c46ddd38d458eae38699 2023-07-22 18:11:52,848 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-22 18:11:52,851 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-22 18:11:52,853 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-22 18:11:52,855 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-22 18:11:52,858 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-22 18:11:52,859 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 
2023-07-22 18:11:52,859 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1690049512859"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:52,861 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-22 18:11:52,861 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cbbeda9c18f1c46ddd38d458eae38699, NAME => 't1,,1690049511803.cbbeda9c18f1c46ddd38d458eae38699.', STARTKEY => '', ENDKEY => ''}] 2023-07-22 18:11:52,861 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-22 18:11:52,862 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1690049512861"}]},"ts":"9223372036854775807"} 2023-07-22 18:11:52,863 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-22 18:11:52,867 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-22 18:11:52,868 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 38 msec 2023-07-22 18:11:52,940 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-22 18:11:52,940 INFO [Listener at localhost/32999] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-22 18:11:52,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:52,943 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:52,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:52,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
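The DISABLE (pid=16) and DELETE (pid=19) procedures above correspond to the usual two-step client teardown. An illustrative helper, again assuming an open Admin handle (the method name is made up for this sketch):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    // Drop a table the way the test teardown does: disable first, then delete.
    static void dropTable(Admin admin, TableName name) throws IOException {
      if (admin.isTableEnabled(name)) {
        admin.disableTable(name); // DisableTableProcedure: regions are closed and unassigned
      }
      admin.deleteTable(name);    // DeleteTableProcedure: region dirs archived, meta rows removed
    }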
2023-07-22 18:11:52,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:52,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:52,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:52,945 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:52,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:52,949 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:52,952 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:52,955 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:52,955 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:52,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:52,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:52,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:52,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:52,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:52,964 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:52,966 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:52,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:52,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:33458 deadline: 1690050712966, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:52,967 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:52,971 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:52,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:52,972 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:52,972 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:52,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:52,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:52,991 INFO [Listener at localhost/32999] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=569 (was 555) - Thread LEAK? -, OpenFileDescriptor=831 (was 813) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 394), ProcessCount=172 (was 172), AvailableMemoryMB=8199 (was 8215) 2023-07-22 18:11:52,992 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-22 18:11:53,010 INFO [Listener at localhost/32999] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=569, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=172, AvailableMemoryMB=8197 2023-07-22 18:11:53,010 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=569 is superior to 500 2023-07-22 18:11:53,010 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-22 18:11:53,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,014 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,015 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 18:11:53,015 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,016 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,017 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:53,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,020 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:53,021 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,024 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:53,024 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:53,026 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,026 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,027 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:53,028 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,031 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,033 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:53,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33458 deadline: 1690050713033, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:53,034 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:53,036 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:53,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,037 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:53,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:53,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:53,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-22 18:11:53,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:53,040 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-22 18:11:53,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-22 18:11:53,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-22 18:11:53,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:53,049 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,050 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:53,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:53,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,062 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:53,063 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:53,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,066 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:53,067 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,069 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,071 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:53,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33458 deadline: 1690050713071, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:53,072 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:53,073 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:53,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,074 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,074 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:53,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:53,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:53,095 INFO [Listener at localhost/32999] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=571 (was 569) - Thread LEAK? 
-, OpenFileDescriptor=831 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 394), ProcessCount=172 (was 172), AvailableMemoryMB=8195 (was 8197) 2023-07-22 18:11:53,095 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-22 18:11:53,113 INFO [Listener at localhost/32999] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=571, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=172, AvailableMemoryMB=8194 2023-07-22 18:11:53,113 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=571 is superior to 500 2023-07-22 18:11:53,113 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-22 18:11:53,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,118 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,119 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 18:11:53,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:53,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:53,126 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,129 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:53,129 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:53,131 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,132 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,134 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:53,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:53,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33458 deadline: 1690050713139, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:53,140 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:53,142 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:53,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,142 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,143 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:53,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:53,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:53,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,147 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-22 18:11:53,148 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,149 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:53,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,153 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:53,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,159 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:53,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:53,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,162 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,164 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:53,166 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,167 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,168 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:53,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33458 deadline: 1690050713169, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:53,170 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:53,171 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:53,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,172 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,172 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:53,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:53,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:53,192 INFO [Listener at localhost/32999] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=572 (was 571) - Thread LEAK? 
-, OpenFileDescriptor=831 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 394), ProcessCount=172 (was 172), AvailableMemoryMB=8192 (was 8194) 2023-07-22 18:11:53,192 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-22 18:11:53,213 INFO [Listener at localhost/32999] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572, OpenFileDescriptor=831, MaxFileDescriptor=60000, SystemLoadAverage=394, ProcessCount=172, AvailableMemoryMB=8191 2023-07-22 18:11:53,213 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-22 18:11:53,213 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-22 18:11:53,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,217 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-22 18:11:53,218 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,219 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:53,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,222 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:53,224 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,226 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:53,227 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:53,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,229 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:53,232 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,234 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,235 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:53,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33458 deadline: 1690050713235, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:53,236 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-22 18:11:53,238 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:53,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,238 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,238 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:53,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:53,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:53,239 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-22 18:11:53,240 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-22 18:11:53,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-22 18:11:53,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,243 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-22 18:11:53,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-22 18:11:53,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,252 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 18:11:53,256 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:53,259 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-22 18:11:53,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-22 18:11:53,354 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-22 18:11:53,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,356 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:33458 deadline: 1690050713354, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-22 18:11:53,362 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-22 18:11:53,369 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-22 18:11:53,378 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-22 18:11:53,379 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 15 msec 2023-07-22 18:11:53,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-22 18:11:53,478 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-22 18:11:53,483 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-22 18:11:53,484 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,485 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-22 18:11:53,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,486 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-22 18:11:53,492 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,494 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,498 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-22 18:11:53,499 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,501 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,504 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-22 18:11:53,504 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,506 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,507 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-22 18:11:53,507 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-22 18:11:53,508 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,510 INFO [PEWorker-1] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-22 18:11:53,512 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 12 msec 2023-07-22 18:11:53,605 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-22 18:11:53,606 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-22 18:11:53,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-22 18:11:53,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-22 18:11:53,612 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,614 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:33458 deadline: 1690049573616, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-22 18:11:53,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,621 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
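The namespace side of testNamespaceConstraint is driven entirely by the hbase.rsgroup.name namespace property: the master's RSGroupAdminEndpoint validates it in preCreateNamespace (rejecting a group that does not exist, as in the "Region server group foo does not exist." call above) and refuses to remove a group that a namespace still references. A minimal client-side sketch of that interaction, assuming a reachable cluster; the class name is illustrative and the property key is the literal string shown in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceRSGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Bind the namespace to an rsgroup with the same property the master
      // logs as {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'}.
      NamespaceDescriptor ns = NamespaceDescriptor.create("Group_foo")
          .addConfiguration("hbase.rsgroup.name", "Group_foo")
          .build();

      // RSGroupAdminEndpoint's preCreateNamespace hook checks the property
      // before the CreateNamespaceProcedure is queued; naming a missing group
      // fails with a ConstraintException instead of creating the namespace.
      admin.createNamespace(ns);
    }
  }
}

While such a namespace exists, removeRSGroup on the referenced group is rejected the same way ("RSGroup Group_foo is referenced by namespace: Group_foo"), so the test deletes the namespace first and only then removes the group.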
2023-07-22 18:11:53,621 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-22 18:11:53,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,626 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-22 18:11:53,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-22 18:11:53,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
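Between methods the suite's teardown routes everything back to the default group: empty MoveTables/MoveServers requests (which the server logs and then ignores) followed by RemoveRSGroup for the groups the test added. A rough sketch of those client calls, assuming the RSGroupAdminClient(Connection) constructor and the moveTables/moveServers/removeRSGroup signatures from the hbase-rsgroup module on this branch; the class name is illustrative:

import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupTeardownSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient groups = new RSGroupAdminClient(conn);

      // Empty sets are accepted and skipped server-side
      // ("moveTables() passed an empty set. Ignoring.").
      groups.moveTables(Collections.<TableName>emptySet(), "default");
      groups.moveServers(Collections.<Address>emptySet(), "default");

      // Groups created during the test are dropped once they hold no servers,
      // no tables, and are not referenced by any namespace.
      groups.removeRSGroup("Group_anotherGroup");
    }
  }
}

Each successful mutation is then persisted by RSGroupInfoManagerImpl, which rewrites the /hbase/rsgroup znodes and logs the new GroupInfo count, as the surrounding DEBUG lines show.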
2023-07-22 18:11:53,628 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-22 18:11:53,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-22 18:11:53,629 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-22 18:11:53,630 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-22 18:11:53,632 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,633 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-22 18:11:53,635 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-22 18:11:53,638 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-22 18:11:53,638 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-22 18:11:53,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-22 18:11:53,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-22 18:11:53,644 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-22 18:11:53,646 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-22 18:11:53,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,648 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:44283] to rsgroup master 2023-07-22 18:11:53,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-22 18:11:53,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:33458 deadline: 1690050713649, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 2023-07-22 18:11:53,650 WARN [Listener at localhost/32999] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:44283 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-22 18:11:53,652 INFO [Listener at localhost/32999] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-22 18:11:53,652 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-22 18:11:53,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-22 18:11:53,653 INFO [Listener at localhost/32999] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33817, jenkins-hbase4.apache.org:38645, jenkins-hbase4.apache.org:38757, jenkins-hbase4.apache.org:46437], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-22 18:11:53,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-22 18:11:53,653 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44283] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-22 18:11:53,675 INFO [Listener at localhost/32999] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=572 (was 572), OpenFileDescriptor=831 (was 831), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=394 (was 394), ProcessCount=172 (was 172), AvailableMemoryMB=8160 (was 8191) 2023-07-22 18:11:53,675 WARN [Listener at localhost/32999] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-22 18:11:53,675 INFO [Listener at localhost/32999] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-22 18:11:53,676 INFO [Listener at localhost/32999] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-22 18:11:53,676 DEBUG [Listener at localhost/32999] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2ad0ad4c to 127.0.0.1:64378 2023-07-22 18:11:53,676 DEBUG [Listener at localhost/32999] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,676 DEBUG [Listener at localhost/32999] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-22 
18:11:53,676 DEBUG [Listener at localhost/32999] util.JVMClusterUtil(257): Found active master hash=876733000, stopped=false 2023-07-22 18:11:53,676 DEBUG [Listener at localhost/32999] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-22 18:11:53,676 DEBUG [Listener at localhost/32999] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-22 18:11:53,676 INFO [Listener at localhost/32999] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:53,678 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:53,678 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:53,678 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:53,678 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:53,678 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:53,678 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-22 18:11:53,678 INFO [Listener at localhost/32999] procedure2.ProcedureExecutor(629): Stopping 2023-07-22 18:11:53,679 DEBUG [Listener at localhost/32999] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x51aab894 to 127.0.0.1:64378 2023-07-22 18:11:53,679 DEBUG [Listener at localhost/32999] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,679 INFO [Listener at localhost/32999] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38645,1690049509158' ***** 2023-07-22 18:11:53,679 INFO [Listener at localhost/32999] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:53,679 INFO [Listener at localhost/32999] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46437,1690049509307' ***** 2023-07-22 18:11:53,679 INFO [Listener at localhost/32999] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:53,680 INFO [Listener at localhost/32999] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,38757,1690049509462' ***** 2023-07-22 18:11:53,680 INFO [Listener at localhost/32999] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:53,680 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1109): Stopping 
infoServer 2023-07-22 18:11:53,680 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:53,680 INFO [Listener at localhost/32999] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33817,1690049511469' ***** 2023-07-22 18:11:53,681 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:53,681 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,683 INFO [Listener at localhost/32999] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-22 18:11:53,688 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:53,688 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:53,688 INFO [RS:1;jenkins-hbase4:46437] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@50144ee6{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:53,688 INFO [RS:2;jenkins-hbase4:38757] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@54a596b9{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:53,689 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:53,689 INFO [RS:0;jenkins-hbase4:38645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4c6e43be{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:53,689 INFO [RS:2;jenkins-hbase4:38757] server.AbstractConnector(383): Stopped ServerConnector@12826d0e{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:53,689 INFO [RS:2;jenkins-hbase4:38757] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:53,689 INFO [RS:0;jenkins-hbase4:38645] server.AbstractConnector(383): Stopped ServerConnector@25e91bdf{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:53,689 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:53,689 INFO [RS:0;jenkins-hbase4:38645] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:53,689 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:53,689 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-22 18:11:53,690 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:53,691 INFO [RS:0;jenkins-hbase4:38645] handler.ContextHandler(1159): 
Stopped o.a.h.t.o.e.j.s.ServletContextHandler@782404d1{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:53,691 INFO [RS:1;jenkins-hbase4:46437] server.AbstractConnector(383): Stopped ServerConnector@46096509{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:53,693 INFO [RS:0;jenkins-hbase4:38645] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@25e1627f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:53,691 INFO [RS:2;jenkins-hbase4:38757] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@5c6daf69{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:53,693 INFO [RS:1;jenkins-hbase4:46437] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:53,694 INFO [RS:2;jenkins-hbase4:38757] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1d10ced3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:53,695 INFO [RS:1;jenkins-hbase4:46437] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1989f106{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:53,696 INFO [RS:1;jenkins-hbase4:46437] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@69348f08{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:53,696 INFO [RS:0;jenkins-hbase4:38645] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:53,696 INFO [RS:0;jenkins-hbase4:38645] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:53,696 INFO [RS:3;jenkins-hbase4:33817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@66a7c817{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-22 18:11:53,696 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:53,696 INFO [RS:0;jenkins-hbase4:38645] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:53,696 INFO [RS:2;jenkins-hbase4:38757] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:53,696 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:53,696 INFO [RS:2;jenkins-hbase4:38757] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
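This tail of the log is the minicluster teardown: the master deletes /hbase/running in ZooKeeper, every region server receives a STOPPING request, the embedded Jetty info servers and coprocessor hosts shut down, and regions such as hbase:rsgroup and hbase:namespace are flushed and closed. A minimal JUnit 4 sketch of the lifecycle that produces this kind of sequence, assuming HBaseTestingUtility's startMiniCluster/shutdownMiniCluster; the class and test names are illustrative:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class MiniClusterLifecycleSketch {
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUp() throws Exception {
    // Starts ZooKeeper, HDFS and an HBase master plus region servers
    // inside the test JVM.
    TEST_UTIL.startMiniCluster(3);
  }

  @AfterClass
  public static void tearDown() throws Exception {
    // Requests cluster shutdown: /hbase/running is removed in ZooKeeper and
    // each region server closes its regions before the cluster threads exit,
    // matching the STOPPING and region-close lines around this point.
    TEST_UTIL.shutdownMiniCluster();
  }

  @Test
  public void smoke() {
    // Placeholder; the real suite runs the RSGroup admin scenarios logged above.
  }
}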
2023-07-22 18:11:53,696 DEBUG [RS:0;jenkins-hbase4:38645] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x665b7fd1 to 127.0.0.1:64378 2023-07-22 18:11:53,697 INFO [RS:3;jenkins-hbase4:33817] server.AbstractConnector(383): Stopped ServerConnector@43995fad{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:53,696 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:53,697 INFO [RS:1;jenkins-hbase4:46437] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:53,697 INFO [RS:3;jenkins-hbase4:33817] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:53,697 DEBUG [RS:0;jenkins-hbase4:38645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,696 INFO [RS:2;jenkins-hbase4:38757] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:53,698 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(3305): Received CLOSE for 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:53,698 INFO [RS:3;jenkins-hbase4:33817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@10a95ce4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:53,698 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:53,698 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38645,1690049509158; all regions closed. 2023-07-22 18:11:53,697 INFO [RS:1;jenkins-hbase4:46437] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:53,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3266efdc48b1d617bfe5b06d6aa0ae7d, disabling compactions & flushes 2023-07-22 18:11:53,699 INFO [RS:1;jenkins-hbase4:46437] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:53,699 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:53,699 INFO [RS:3;jenkins-hbase4:33817] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@74695aa5{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:53,699 DEBUG [RS:2;jenkins-hbase4:38757] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x1caaede6 to 127.0.0.1:64378 2023-07-22 18:11:53,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:53,699 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(3305): Received CLOSE for 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:53,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 
after waiting 0 ms 2023-07-22 18:11:53,699 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:53,699 DEBUG [RS:2;jenkins-hbase4:38757] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,700 DEBUG [RS:1;jenkins-hbase4:46437] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6da1db12 to 127.0.0.1:64378 2023-07-22 18:11:53,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9f5da76f167889ae53c9bd8b306448b6, disabling compactions & flushes 2023-07-22 18:11:53,699 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:53,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:53,700 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,700 INFO [RS:2;jenkins-hbase4:38757] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:53,700 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-22 18:11:53,700 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 3266efdc48b1d617bfe5b06d6aa0ae7d 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-22 18:11:53,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:53,700 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1478): Online Regions={9f5da76f167889ae53c9bd8b306448b6=hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6.} 2023-07-22 18:11:53,700 INFO [RS:2;jenkins-hbase4:38757] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:53,700 INFO [RS:2;jenkins-hbase4:38757] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:53,700 DEBUG [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1504): Waiting on 9f5da76f167889ae53c9bd8b306448b6 2023-07-22 18:11:53,701 INFO [RS:3;jenkins-hbase4:33817] regionserver.HeapMemoryManager(220): Stopping 2023-07-22 18:11:53,701 INFO [RS:3;jenkins-hbase4:33817] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-22 18:11:53,701 INFO [RS:3;jenkins-hbase4:33817] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-22 18:11:53,701 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-22 18:11:53,700 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. after waiting 0 ms 2023-07-22 18:11:53,701 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 
2023-07-22 18:11:53,701 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:53,700 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-22 18:11:53,702 DEBUG [RS:3;jenkins-hbase4:33817] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x3efa07c0 to 127.0.0.1:64378 2023-07-22 18:11:53,702 DEBUG [RS:3;jenkins-hbase4:33817] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,701 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 9f5da76f167889ae53c9bd8b306448b6 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-22 18:11:53,701 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,702 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-22 18:11:53,702 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 3266efdc48b1d617bfe5b06d6aa0ae7d=hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d.} 2023-07-22 18:11:53,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-22 18:11:53,702 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33817,1690049511469; all regions closed. 2023-07-22 18:11:53,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-22 18:11:53,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-22 18:11:53,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-22 18:11:53,702 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-22 18:11:53,702 DEBUG [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1504): Waiting on 1588230740, 3266efdc48b1d617bfe5b06d6aa0ae7d 2023-07-22 18:11:53,702 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-22 18:11:53,714 DEBUG [RS:0;jenkins-hbase4:38645] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs 2023-07-22 18:11:53,714 INFO [RS:0;jenkins-hbase4:38645] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38645%2C1690049509158:(num 1690049510176) 2023-07-22 18:11:53,714 DEBUG [RS:0;jenkins-hbase4:38645] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,714 INFO [RS:0;jenkins-hbase4:38645] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,714 INFO [RS:0;jenkins-hbase4:38645] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:53,715 INFO [RS:0;jenkins-hbase4:38645] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-22 18:11:53,715 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:53,715 INFO [RS:0;jenkins-hbase4:38645] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:53,715 INFO [RS:0;jenkins-hbase4:38645] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:53,716 INFO [RS:0;jenkins-hbase4:38645] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38645 2023-07-22 18:11:53,722 DEBUG [RS:3;jenkins-hbase4:33817] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs 2023-07-22 18:11:53,722 INFO [RS:3;jenkins-hbase4:33817] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33817%2C1690049511469:(num 1690049511788) 2023-07-22 18:11:53,722 DEBUG [RS:3;jenkins-hbase4:33817] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,723 INFO [RS:3;jenkins-hbase4:33817] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,727 INFO [RS:3;jenkins-hbase4:33817] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:53,731 INFO [RS:3;jenkins-hbase4:33817] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:53,731 INFO [RS:3;jenkins-hbase4:33817] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:53,731 INFO [RS:3;jenkins-hbase4:33817] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:53,731 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-22 18:11:53,737 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/.tmp/m/138ab223545a4d5b89b53e944bb2a268 2023-07-22 18:11:53,738 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/.tmp/info/e8d082afc17c4ba387bec577675e57da 2023-07-22 18:11:53,740 INFO [RS:3;jenkins-hbase4:33817] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33817 2023-07-22 18:11:53,748 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 138ab223545a4d5b89b53e944bb2a268 2023-07-22 18:11:53,748 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/.tmp/m/138ab223545a4d5b89b53e944bb2a268 as hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/m/138ab223545a4d5b89b53e944bb2a268 2023-07-22 18:11:53,751 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/.tmp/info/747ca7dac2184632b45c9cd7ff7387fb 2023-07-22 18:11:53,752 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8d082afc17c4ba387bec577675e57da 2023-07-22 18:11:53,754 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 138ab223545a4d5b89b53e944bb2a268 2023-07-22 18:11:53,755 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/m/138ab223545a4d5b89b53e944bb2a268, entries=12, sequenceid=29, filesize=5.4 K 2023-07-22 18:11:53,760 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for 3266efdc48b1d617bfe5b06d6aa0ae7d in 60ms, sequenceid=29, compaction requested=false 2023-07-22 18:11:53,761 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 747ca7dac2184632b45c9cd7ff7387fb 2023-07-22 18:11:53,762 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/.tmp/info/747ca7dac2184632b45c9cd7ff7387fb as 
hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/info/747ca7dac2184632b45c9cd7ff7387fb 2023-07-22 18:11:53,765 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,765 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/rsgroup/3266efdc48b1d617bfe5b06d6aa0ae7d/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-22 18:11:53,773 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:53,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:53,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3266efdc48b1d617bfe5b06d6aa0ae7d: 2023-07-22 18:11:53,774 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1690049510881.3266efdc48b1d617bfe5b06d6aa0ae7d. 2023-07-22 18:11:53,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 747ca7dac2184632b45c9cd7ff7387fb 2023-07-22 18:11:53,774 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/info/747ca7dac2184632b45c9cd7ff7387fb, entries=3, sequenceid=9, filesize=5.0 K 2023-07-22 18:11:53,775 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 9f5da76f167889ae53c9bd8b306448b6 in 74ms, sequenceid=9, compaction requested=false 2023-07-22 18:11:53,776 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/.tmp/rep_barrier/8b17277136c142daa320a3ee75a8c225 2023-07-22 18:11:53,803 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] 
zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33817,1690049511469 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:53,804 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38645,1690049509158 2023-07-22 18:11:53,808 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8b17277136c142daa320a3ee75a8c225 2023-07-22 18:11:53,809 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/namespace/9f5da76f167889ae53c9bd8b306448b6/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-22 18:11:53,810 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 
2023-07-22 18:11:53,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9f5da76f167889ae53c9bd8b306448b6: 2023-07-22 18:11:53,810 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1690049510456.9f5da76f167889ae53c9bd8b306448b6. 2023-07-22 18:11:53,826 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/.tmp/table/6286139fc64c4377ba03080c536aac42 2023-07-22 18:11:53,833 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6286139fc64c4377ba03080c536aac42 2023-07-22 18:11:53,834 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/.tmp/info/e8d082afc17c4ba387bec577675e57da as hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/info/e8d082afc17c4ba387bec577675e57da 2023-07-22 18:11:53,842 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for e8d082afc17c4ba387bec577675e57da 2023-07-22 18:11:53,842 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/info/e8d082afc17c4ba387bec577675e57da, entries=22, sequenceid=26, filesize=7.3 K 2023-07-22 18:11:53,843 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/.tmp/rep_barrier/8b17277136c142daa320a3ee75a8c225 as hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/rep_barrier/8b17277136c142daa320a3ee75a8c225 2023-07-22 18:11:53,848 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8b17277136c142daa320a3ee75a8c225 2023-07-22 18:11:53,848 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/rep_barrier/8b17277136c142daa320a3ee75a8c225, entries=1, sequenceid=26, filesize=4.9 K 2023-07-22 18:11:53,849 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/.tmp/table/6286139fc64c4377ba03080c536aac42 as hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/table/6286139fc64c4377ba03080c536aac42 2023-07-22 18:11:53,855 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 6286139fc64c4377ba03080c536aac42 2023-07-22 18:11:53,855 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added 
hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/table/6286139fc64c4377ba03080c536aac42, entries=6, sequenceid=26, filesize=5.1 K 2023-07-22 18:11:53,856 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 154ms, sequenceid=26, compaction requested=false 2023-07-22 18:11:53,866 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-22 18:11:53,867 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-22 18:11:53,867 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:53,867 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-22 18:11:53,867 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-22 18:11:53,900 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46437,1690049509307; all regions closed. 2023-07-22 18:11:53,902 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,38757,1690049509462; all regions closed. 2023-07-22 18:11:53,904 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38645,1690049509158] 2023-07-22 18:11:53,904 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38645,1690049509158; numProcessing=1 2023-07-22 18:11:53,907 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38645,1690049509158 already deleted, retry=false 2023-07-22 18:11:53,907 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38645,1690049509158 expired; onlineServers=3 2023-07-22 18:11:53,907 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33817,1690049511469] 2023-07-22 18:11:53,907 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33817,1690049511469; numProcessing=2 2023-07-22 18:11:53,908 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33817,1690049511469 already deleted, retry=false 2023-07-22 18:11:53,908 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33817,1690049511469 expired; onlineServers=2 2023-07-22 18:11:53,909 DEBUG [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs 2023-07-22 18:11:53,909 INFO [RS:1;jenkins-hbase4:46437] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46437%2C1690049509307:(num 1690049510176) 2023-07-22 18:11:53,909 DEBUG [RS:1;jenkins-hbase4:46437] ipc.AbstractRpcClient(494): Stopping rpc client 
2023-07-22 18:11:53,909 INFO [RS:1;jenkins-hbase4:46437] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,909 DEBUG [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs 2023-07-22 18:11:53,909 INFO [RS:1;jenkins-hbase4:46437] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:53,909 INFO [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38757%2C1690049509462.meta:.meta(num 1690049510401) 2023-07-22 18:11:53,909 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:53,909 INFO [RS:1;jenkins-hbase4:46437] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-22 18:11:53,909 INFO [RS:1;jenkins-hbase4:46437] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-22 18:11:53,909 INFO [RS:1;jenkins-hbase4:46437] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-22 18:11:53,911 INFO [RS:1;jenkins-hbase4:46437] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46437 2023-07-22 18:11:53,914 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:53,914 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:53,914 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46437,1690049509307 2023-07-22 18:11:53,916 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46437,1690049509307] 2023-07-22 18:11:53,916 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46437,1690049509307; numProcessing=3 2023-07-22 18:11:53,916 DEBUG [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/oldWALs 2023-07-22 18:11:53,916 INFO [RS:2;jenkins-hbase4:38757] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C38757%2C1690049509462:(num 1690049510198) 2023-07-22 18:11:53,916 DEBUG [RS:2;jenkins-hbase4:38757] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:53,917 INFO [RS:2;jenkins-hbase4:38757] regionserver.LeaseManager(133): Closed leases 2023-07-22 18:11:53,917 INFO [RS:2;jenkins-hbase4:38757] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-22 18:11:53,917 INFO 
[regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:53,918 INFO [RS:2;jenkins-hbase4:38757] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:38757 2023-07-22 18:11:54,016 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,016 INFO [RS:1;jenkins-hbase4:46437] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46437,1690049509307; zookeeper connection closed. 2023-07-22 18:11:54,016 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:46437-0x1018e3b796b0002, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,016 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@eb16f5c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@eb16f5c 2023-07-22 18:11:54,018 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,38757,1690049509462 2023-07-22 18:11:54,018 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-22 18:11:54,018 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46437,1690049509307 already deleted, retry=false 2023-07-22 18:11:54,018 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46437,1690049509307 expired; onlineServers=1 2023-07-22 18:11:54,019 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,38757,1690049509462] 2023-07-22 18:11:54,019 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,38757,1690049509462; numProcessing=4 2023-07-22 18:11:54,020 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,38757,1690049509462 already deleted, retry=false 2023-07-22 18:11:54,020 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,38757,1690049509462 expired; onlineServers=0 2023-07-22 18:11:54,020 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44283,1690049508986' ***** 2023-07-22 18:11:54,020 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-22 18:11:54,021 DEBUG [M:0;jenkins-hbase4:44283] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@307e6018, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-22 18:11:54,021 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-22 18:11:54,023 DEBUG [Listener at 
localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-22 18:11:54,024 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-22 18:11:54,024 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-22 18:11:54,024 INFO [M:0;jenkins-hbase4:44283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@63887653{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-22 18:11:54,024 INFO [M:0;jenkins-hbase4:44283] server.AbstractConnector(383): Stopped ServerConnector@79271683{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:54,024 INFO [M:0;jenkins-hbase4:44283] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-22 18:11:54,025 INFO [M:0;jenkins-hbase4:44283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@64e35faa{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-22 18:11:54,025 INFO [M:0;jenkins-hbase4:44283] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4e484fe9{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/hadoop.log.dir/,STOPPED} 2023-07-22 18:11:54,026 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44283,1690049508986 2023-07-22 18:11:54,026 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44283,1690049508986; all regions closed. 2023-07-22 18:11:54,026 DEBUG [M:0;jenkins-hbase4:44283] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-22 18:11:54,026 INFO [M:0;jenkins-hbase4:44283] master.HMaster(1491): Stopping master jetty server 2023-07-22 18:11:54,027 INFO [M:0;jenkins-hbase4:44283] server.AbstractConnector(383): Stopped ServerConnector@5e47ccd2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-22 18:11:54,027 DEBUG [M:0;jenkins-hbase4:44283] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-22 18:11:54,027 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-22 18:11:54,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049509799] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1690049509799,5,FailOnTimeoutGroup] 2023-07-22 18:11:54,027 DEBUG [M:0;jenkins-hbase4:44283] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-22 18:11:54,027 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049509794] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1690049509794,5,FailOnTimeoutGroup] 2023-07-22 18:11:54,027 INFO [M:0;jenkins-hbase4:44283] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-22 18:11:54,027 INFO [M:0;jenkins-hbase4:44283] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-22 18:11:54,027 INFO [M:0;jenkins-hbase4:44283] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-22 18:11:54,027 DEBUG [M:0;jenkins-hbase4:44283] master.HMaster(1512): Stopping service threads 2023-07-22 18:11:54,028 INFO [M:0;jenkins-hbase4:44283] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-22 18:11:54,028 ERROR [M:0;jenkins-hbase4:44283] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-22 18:11:54,028 INFO [M:0;jenkins-hbase4:44283] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-22 18:11:54,028 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-22 18:11:54,028 DEBUG [M:0;jenkins-hbase4:44283] zookeeper.ZKUtil(398): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-22 18:11:54,028 WARN [M:0;jenkins-hbase4:44283] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-22 18:11:54,028 INFO [M:0;jenkins-hbase4:44283] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-22 18:11:54,028 INFO [M:0;jenkins-hbase4:44283] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-22 18:11:54,028 DEBUG [M:0;jenkins-hbase4:44283] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-22 18:11:54,028 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:54,029 DEBUG [M:0;jenkins-hbase4:44283] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:54,029 DEBUG [M:0;jenkins-hbase4:44283] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-22 18:11:54,029 DEBUG [M:0;jenkins-hbase4:44283] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-22 18:11:54,029 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.21 KB heapSize=90.66 KB 2023-07-22 18:11:54,042 INFO [M:0;jenkins-hbase4:44283] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.21 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/36f9cd92de9847ba8a3fd291a4ffaead 2023-07-22 18:11:54,048 DEBUG [M:0;jenkins-hbase4:44283] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/36f9cd92de9847ba8a3fd291a4ffaead as hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/36f9cd92de9847ba8a3fd291a4ffaead 2023-07-22 18:11:54,053 INFO [M:0;jenkins-hbase4:44283] regionserver.HStore(1080): Added hdfs://localhost:35975/user/jenkins/test-data/ee709936-a4e0-f4af-b2b7-a5a28a53919e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/36f9cd92de9847ba8a3fd291a4ffaead, entries=22, sequenceid=175, filesize=11.1 K 2023-07-22 18:11:54,053 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegion(2948): Finished flush of dataSize ~76.21 KB/78044, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 24ms, sequenceid=175, compaction requested=false 2023-07-22 18:11:54,055 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-22 18:11:54,055 DEBUG [M:0;jenkins-hbase4:44283] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-22 18:11:54,059 INFO [M:0;jenkins-hbase4:44283] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-22 18:11:54,059 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-22 18:11:54,060 INFO [M:0;jenkins-hbase4:44283] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44283 2023-07-22 18:11:54,062 DEBUG [M:0;jenkins-hbase4:44283] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44283,1690049508986 already deleted, retry=false 2023-07-22 18:11:54,194 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,194 INFO [M:0;jenkins-hbase4:44283] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44283,1690049508986; zookeeper connection closed. 
2023-07-22 18:11:54,195 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): master:44283-0x1018e3b796b0000, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,295 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,295 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38757-0x1018e3b796b0003, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,295 INFO [RS:2;jenkins-hbase4:38757] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38757,1690049509462; zookeeper connection closed. 2023-07-22 18:11:54,295 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7211247c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7211247c 2023-07-22 18:11:54,395 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,395 INFO [RS:3;jenkins-hbase4:33817] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33817,1690049511469; zookeeper connection closed. 2023-07-22 18:11:54,395 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:33817-0x1018e3b796b000b, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,395 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6df5a3d9] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6df5a3d9 2023-07-22 18:11:54,495 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,495 INFO [RS:0;jenkins-hbase4:38645] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,38645,1690049509158; zookeeper connection closed. 
2023-07-22 18:11:54,495 DEBUG [Listener at localhost/32999-EventThread] zookeeper.ZKWatcher(600): regionserver:38645-0x1018e3b796b0001, quorum=127.0.0.1:64378, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-22 18:11:54,496 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@5c5c0722] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@5c5c0722 2023-07-22 18:11:54,496 INFO [Listener at localhost/32999] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-22 18:11:54,496 WARN [Listener at localhost/32999] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:54,499 INFO [Listener at localhost/32999] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:54,603 WARN [BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:54,603 WARN [BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1266385466-172.31.14.131-1690049508270 (Datanode Uuid c945822f-8771-4245-9867-0c708d2a36c4) service to localhost/127.0.0.1:35975 2023-07-22 18:11:54,604 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data5/current/BP-1266385466-172.31.14.131-1690049508270] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:54,604 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data6/current/BP-1266385466-172.31.14.131-1690049508270] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:54,605 WARN [Listener at localhost/32999] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:54,608 INFO [Listener at localhost/32999] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:54,711 WARN [BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:54,711 WARN [BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1266385466-172.31.14.131-1690049508270 (Datanode Uuid bcbfa0d3-c776-46ee-9f2e-48b9bbb352cb) service to localhost/127.0.0.1:35975 2023-07-22 18:11:54,712 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data3/current/BP-1266385466-172.31.14.131-1690049508270] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:54,712 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data4/current/BP-1266385466-172.31.14.131-1690049508270] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:54,713 WARN [Listener at localhost/32999] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-22 18:11:54,718 INFO [Listener at localhost/32999] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:54,822 WARN [BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-22 18:11:54,822 WARN [BP-1266385466-172.31.14.131-1690049508270 heartbeating to localhost/127.0.0.1:35975] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1266385466-172.31.14.131-1690049508270 (Datanode Uuid c1b456db-f2c0-4c0d-bd3d-e694123a8179) service to localhost/127.0.0.1:35975 2023-07-22 18:11:54,822 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data1/current/BP-1266385466-172.31.14.131-1690049508270] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:54,823 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/c37f6385-6133-ecd2-576a-9a38636bad74/cluster_1f5f6e41-dcdc-3437-cae1-8a323fc265f7/dfs/data/data2/current/BP-1266385466-172.31.14.131-1690049508270] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-22 18:11:54,833 INFO [Listener at localhost/32999] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-22 18:11:54,946 INFO [Listener at localhost/32999] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-22 18:11:54,972 INFO [Listener at localhost/32999] hbase.HBaseTestingUtility(1293): Minicluster is down